Article

A Body–Machine Interface for Assistive Robot Control in Spinal Cord Injury: System Description and Preliminary Tests

by Aurora Freccero 1,*,†, Maddalena Feder 2,3,†, Giorgio Grioli 2,3, Manuel Giuseppe Catalano 2, Antonino Massone 4, Antonio Bicchi 2,3 and Maura Casadio 1

1 Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genova, 16145 Genova, Italy
2 Soft Robotics for Human Cooperation and Rehabilitation, Istituto Italiano di Tecnologia, 16163 Genova, Italy
3 Department of Information Engineering and Centro di Ricerca “Enrico Piaggio”, University of Pisa, 56122 Pisa, Italy
4 S.C. Unità Spinale Unipolare, Santa Corona Hospital, ASL2 Savonese, 17027 Pietra Ligure, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2025, 15(4), 1792; https://doi.org/10.3390/app15041792
Submission received: 20 December 2024 / Revised: 3 February 2025 / Accepted: 7 February 2025 / Published: 10 February 2025
(This article belongs to the Special Issue Assistive Technology for Rehabilitation)

Abstract

Motor impairments, particularly spinal cord injuries, impact thousands of people each year, resulting in severe sensory and motor disabilities. Assistive technologies play a crucial role in supporting these individuals with activities of daily living. Among such technologies, body–machine interfaces (BoMIs) are particularly important, as they convert residual body movements into control signals for external robotic devices. The main challenge lies in developing versatile control interfaces that can adapt to the unique needs of individual users. This study aims to adapt a novel control framework, designed to translate residual user movements into commands for the humanoid robot Alter-Ego, to people with spinal cord injury. After testing and refining the control algorithm, we developed an experimental protocol to train users to control the robot in a simulated environment. A total of 12 unimpaired participants and two individuals affected by spinal cord injury took part in this study, which was designed to assess the system’s applicability and gather end-user feedback on its performance in assisting with daily tasks. Key metrics, including the system’s usability, accuracy, and performance improvements in navigation and reaching tasks, were assessed. The results suggest that assistive robots can be effectively controlled using minimal residual movements. Furthermore, structured training sessions significantly enhance overall performance and improve the accuracy of the control algorithm across the selected tasks.

1. Introduction

Human disabilities affect millions of people worldwide and can result from congenital conditions, accidents, or aging. Among these, motor disorders specifically impair movement and sensory perception, posing significant challenges that require continuous research and innovation in technologies that can improve quality of life. Mobility, independence, and the ability to perform activities of daily living (ADLs) are critical factors that greatly influence the quality of life of individuals with motor impairments [1]. Among sensorimotor impairments, spinal cord injuries (SCIs) represent a significant disability, affecting 250,000 to 500,000 people per year [2]. These injuries disrupt signal transmission between the brain and the body, causing permanent changes in motor, sensory, and autonomic functions below the injury level. SCIs are classified by injury level (cervical, thoracic, lumbar, sacral) and completeness (complete or incomplete). Higher injury levels are associated with more severe motor impairments, resulting in greater challenges in leading an independent life and in controlling assistive devices designed to support activities of daily living. Although loss of mobility is the most apparent consequence, SCIs also cause critical complications that affect other body functions. Statistics indicate that 20–30% of people with SCI experience clinically significant depression (https://www.who.int/publications/i/item/9789241564588, accessed on 9 February 2025), highlighting the need for interdisciplinary care that addresses both physical and emotional well-being.
Due to their great impact, spinal cord injuries are a key focus in assistive robotics, whose objective is to improve independence and quality of life [3]. Human–machine interfaces (HMIs) are essential for integrating human cognitive and physical abilities with machine functionalities, facilitating communication between users and external systems [4]. A subset of these systems, known as body–machine interfaces (BoMIs), exploits residual human signals to generate control inputs for external devices such as prosthetic limbs, powered wheelchairs, robotic arms, manipulators, or displays. While BoMIs can rely on various human signals, such as muscle contractions, body movements, eye gaze, or tongue gestures, a particular subgroup, brain–computer interfaces (BCIs), focuses exclusively on brain signals. This study, however, emphasizes BoMIs that exploit users’ residual body movements [5]. By mapping such motions to external devices, BoMIs can expand the functional capabilities of users to compensate for lost sensorimotor functions [6]. Body movements from users’ residual active body parts serve as control inputs that provide useful information about users’ intent and can be integrated into the control loop, with the great advantage of being a non-invasive solution. Despite promising advancements, several challenges and limitations remain. A key issue is designing interfaces that are both intuitive and adaptable to different levels of impairment. Overcoming this issue is important to increase the adoption of assistive devices and to reduce the abandonment rate. BoMIs employ several sensor technologies, including infrared cameras [7], inertial measurement units (IMUs) [8], EMG sensors [9], or hybrid combinations [10], to capture users’ motions effectively.
In general, the residual movements exploited by BoMIs are voluntary or involuntary motions that remain present in individuals with partial or severe motor impairments. When designing a BoMI, it is crucial to consider the ability to discriminate between voluntary and involuntary motions. For instance, Kirchner et al. [11] explain how the use of electroencephalographic (EEG) signals in BMIs holds great potential as an indicator of voluntary movements. Indeed, the readiness potential (RP) is observed only before voluntary movements, not involuntary ones, and can therefore be used as a discriminator. A disadvantage is that daily use of EEG can become tiresome for users. Other methods exploit signal processing techniques, such as filters [12]. Involuntary movements are also commonly observed in individuals with motor disorders, including those with spinal cord injuries. One of the most common involuntary movements in SCI subjects is associated with sleep-related periodic leg movements [13]. Involuntary contractions are characterized by brief episodes of high-intensity activity (e.g., muscle spasms) or by long-lasting motor unit activity at low firing rates. The spasms can affect both legs and hands [14].
Even in severe cases, the residual movements often exceed the number of variables needed to control devices such as powered wheelchairs or computers [15]. By leveraging these movements to control assistive devices, users can regain functional independence. This approach also offers several advantages in assistive technology, as users rely on existing motor skills rather than learning entirely new control paradigms, which reduces cognitive load and accelerates the adoption of assistive devices, leading to lower abandonment rates. Additionally, residual motions can be used in a rehabilitative context to accelerate physical rehabilitation by engaging the preserved muscles, promoting neuromuscular recovery, supporting muscle tone, and preventing atrophy [15].
Despite their potential, using residual movements presents challenges, particularly the variability in motor function among individuals. This necessitates personalized calibration and adaptive algorithms for optimal control. Additionally, muscle fatigue and signal degradation over time can affect the long-term usability of these systems [16].
This study investigates a specific control approach for assistive robotic devices, outlined in [17], which leverages the user’s residual motions. Although this control strategy has only recently been introduced and shows promise for individuals with various motor impairments, our focus is on adapting it for people with spinal cord injuries (SCIs). To achieve this, we first revisit the key aspects of the control algorithm and outline the modifications implemented to tailor it specifically to SCI users. The primary objective of our study is to conduct a statistical analysis involving 12 unimpaired subjects performing two tasks, in order to evaluate the system’s accuracy, usability, and time performance before and after targeted training sessions. As a secondary goal, we aim to test the same experimental protocol in a pilot study with two SCI subjects. This will provide a broader perspective on the system’s functionality with the intended population and assess whether they can complete the tasks within the desired maximum time. This final pilot study will assist us in refining the control strategy and training sessions for future investigations specifically targeting the SCI population.

2. Method

Control Algorithm

Let us consider a human model and an assistive robot (e.g., prosthesis, avatar, supernumerary limb) as part of a unified kinematic chain, with $n_h$ human joints and $n_r$ robotic joints. The user’s intent is defined in terms of reaching a desired goal $\bar{x}_e$ with the robotic end-effector $x_e$, whose pose depends on the joint configuration $q = [q_h \; q_r]^T$, where $q_h$ and $q_r$ represent the human and robotic joint variables, respectively. The reaching error is defined as
$$e_e = \bar{x}_e - x_e.$$
Humans often attempt to reach the goal without leveraging the assistance provided by the robotic device. This results in unnatural body movements, which can lead to fatigue for the user [18]. To mitigate this, building on the work presented in Legrand et al. [19], the study in Feder et al. [17] introduces a reference frame, $x_c$, attached to a selected body segment (e.g., the shoulder). The target pose of this frame, when the user’s body is in a relaxed configuration, is denoted as $\bar{x}_c$, which makes it possible to evaluate the relative displacement of the user from their relaxed configuration. The displacement error, expressed in terms of both translation and orientation, is defined as
$$e_c = \bar{x}_c - x_c.$$
As outlined in [17], the human and robotic Jacobian matrices for the reaching task are denoted as $J_h^e = J_h^e(q) = \partial x_e / \partial q_h$ and $J_r^e = J_r^e(q) = \partial x_e / \partial q_r$, respectively. Similarly, for the body motions, we have $J_h^c = J_h^c(q) = \partial x_c / \partial q_h$ and $J_r^c = J_r^c(q) = \partial x_c / \partial q_r$. The overall system kinematics can then be expressed as
$$\begin{bmatrix} \dot{e}_e \\ \dot{e}_c \end{bmatrix} = \begin{bmatrix} \dot{\bar{x}}_e \\ \dot{\bar{x}}_c \end{bmatrix} - \begin{bmatrix} J_h^e(q) & J_r^e(q) \\ J_h^c(q) & J_r^c(q) \end{bmatrix} \begin{bmatrix} \dot{q}_h \\ \dot{q}_r \end{bmatrix}.$$
The control objective is to compute $\dot{q}_r$ such that $e_e$ asymptotically approaches zero. This is particularly challenging because the desired target $\bar{x}_e$, and consequently $e_e$, is unknown to the robotic controller. To address this, the solution proposed in [17] integrates both the human and robotic components into a unified dynamic system with state $x = [e_e \; e_c]^T$, control input $u = \dot{q}_r$, and measurable output $y = e_c$. This results in the following time-variant dynamic system:
$$\begin{cases} \dot{x}(t) = A x(t) + B u(t) \\ y(t) = C x(t) + D u(t). \end{cases}$$
The solution proposed in [17] involves implementing an asymptotic observer to estimate $e_e$. This estimate is then used in a linear quadratic Gaussian (LQG) regulator, where the gain matrix $K$ is iteratively recomputed during the motion. This control framework is adaptable to a variety of human–robot systems. Indeed, this approach translates user-performed residual motions into the robotic commands necessary for reaching targets. In this way, even slight user motions are sufficient to activate the controller. Once the user observes the robot beginning to move toward the desired target, they can return to their resting position. As highlighted in [17], residual motions are significantly more pronounced when the control is not activated (leaving the user fully responsible for the reaching task) compared to when the control is active, with the robotic device assisting the user in completing the task. This system effectively leverages the user’s residual motions without actively amplifying them, as excessive residual motion could lead to pain and poor posture over time. Unlike traditional joint-to-joint mapping approaches, which require significant mental effort from the user to actively control each robotic joint individually, this approach allows the user to command multiple robotic degrees of freedom (DoFs) simultaneously by providing only a slight motion intention.
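To make the structure of this controller concrete, the following is a minimal Python sketch of the observer-plus-LQG loop, not the reference implementation of [17]. The matrices A, B, C and the weight and noise covariances Q, R, W, V are placeholders (in [17] they are derived from the coupled human–robot kinematics, and the gains are recomputed iteratively as the Jacobians change along the motion); D is taken as zero.

```python
import numpy as np
from scipy import linalg

def recompute_gains(A, B, C, Q, R, W, V):
    """Re-derive controller and observer gains whenever A, B, C change
    with the configuration (standard continuous-time LQG design)."""
    P = linalg.solve_continuous_are(A, B, Q, R)      # control Riccati eq.
    K = np.linalg.solve(R, B.T @ P)                  # LQR gain
    S = linalg.solve_continuous_are(A.T, C.T, W, V)  # estimator Riccati eq.
    L_obs = S @ C.T @ np.linalg.inv(V)               # observer (Kalman) gain
    return K, L_obs

def lqg_step(A, B, C, K, L_obs, x_hat, y, u_prev, dt):
    """One control-loop step: correct the estimate of x = [e_e; e_c]
    using the measured body-displacement error y = e_c, then compute
    the robot joint-velocity command u = q_r_dot (D assumed zero)."""
    x_pred = x_hat + dt * (A @ x_hat + B @ u_prev)   # model prediction
    x_hat = x_pred + L_obs @ (y - C @ x_pred)        # measurement correction
    u = -K @ x_hat                                   # feedback on estimated errors
    return x_hat, u
```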
In this study, we focus on the application of this control to disconnected human–robot systems. In such scenarios, users are immersed in the remote environment of the robotic avatar, moving their bodies and controlling the robotic end-effector as if they were physically embodied in the avatar. For these cases, $J_h^e$ in the system kinematics is zero, as the human joints do not actively participate in the reaching task, unlike in connected human–robot systems. Additionally, $J_r^c$ is also null because the robotic joints do not influence the pose of $x_c$. In [17], the frame $x_c$ is placed on the user’s right shoulder. Its pose, as well as all other human joint values, can be directly measured using inertial measurement unit (IMU) sensors placed along the user’s upper body. For this purpose, XSens MVN sensors can be employed, providing the pose of all joints in the user’s biomechanical model, denoted as $q_h$. In particular, the key sensors for this control strategy are those placed on the pelvis, trunk, and shoulder. During the control loop, the error $e_c$ can be directly measured as the difference between the pose of the right shoulder relative to the pelvis reference frame at the starting time, $t = 0$ s, and its pose at the elapsed time. This formulation incorporates all residual motions of the user’s upper body up to the shoulder within this term. Using the joint angles and body measurements, the Jacobian matrices can also be computed at each time step. The results presented in [17] demonstrate that, with this experimental setup, the user can incline and rotate their trunk and right shoulder to control virtual prosthetic arms with 3, 4, and 7 DoFs attached to their digital twin, as well as a physical robot avatar system, to reach the target. The study highlights that it is possible to estimate the reaching error, and consequently the goal pose, based on the user’s motion intentions. This enables robotic devices to reach the intended target without requiring the user to explicitly specify it, allowing the user to focus on the desired goal rather than directly controlling each robotic joint. Since the user plays a critical role in guiding control, the framework’s effectiveness largely depends on their residual motion capabilities. This necessitates tailoring the control system to the user by fine-tuning the weight matrices in the controller to match task requirements and the user’s motion abilities. Another key factor is the choice of the frame $x_c$: its placement can be adjusted to optimize the control strategy, aligning it more closely with the user’s individual motion capabilities. While the control approach presented in [17] demonstrates promising adaptability across various systems and user conditions, it lacks validation through multi-subject tests to assess its usability and feasibility comprehensively. In this study, individuals with spinal cord injuries are the target population, with a focus on their residual movement capabilities, which affect both the tuning of the weight matrices and the placement of the $x_c$ frame. Our goal is to adapt the control algorithm for robotic avatar systems to meet the specific needs of this population. We first aim to perform multi-subject tests with unimpaired participants, to evaluate the system’s usability, accuracy, and performance in navigation and reaching tasks before and after tailored training sessions.
Second, we describe a pilot study with two SCI subjects, in which we test the experimental training protocol and verify whether they are able to perform the proposed tasks.
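As a complement to the description above, the following is a minimal sketch of how the displacement error $e_c$ could be computed from the IMU-provided poses. It assumes the motion-capture suite exposes homogeneous transforms for the pelvis and the right shoulder; the function name, inputs, and the axis-angle orientation error are our assumptions, not the reference implementation of [17].

```python
import numpy as np

def body_displacement_error(T_pelvis, T_shoulder, T_shoulder_rest):
    """e_c: pose of the shoulder frame x_c relative to the pelvis,
    compared against its relaxed pose captured at t = 0."""
    # Current shoulder pose expressed in the pelvis frame.
    T_rel = np.linalg.inv(T_pelvis) @ T_shoulder
    # Translational part of the error.
    e_pos = T_shoulder_rest[:3, 3] - T_rel[:3, 3]
    # Orientation error as a rotation vector (axis-angle).
    R_err = T_shoulder_rest[:3, :3] @ T_rel[:3, :3].T
    angle = np.arccos(np.clip((np.trace(R_err) - 1) / 2, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        e_rot = np.zeros(3)
    else:
        axis = np.array([R_err[2, 1] - R_err[1, 2],
                         R_err[0, 2] - R_err[2, 0],
                         R_err[1, 0] - R_err[0, 1]]) / (2 * np.sin(angle))
        e_rot = angle * axis
    return np.concatenate([e_pos, e_rot])  # 6D error (translation, orientation)
```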

3. Experiments

To evaluate the effectiveness and functionality of the previously introduced control algorithm, we test its application in a humanoid robotic avatar scenario, where users control the avatar to reach a predefined pose, $\bar{x}_e$, within a virtual environment. This section outlines the experimental protocol, detailing the setup, the subjects involved, and the experiments conducted.

3.1. Experimental Setup

We performed all experiments in a virtual environment developed using the Robot Operating System (ROS) Noetic framework, integrated with the Gazebo simulation environment. The experimental setup relies on two main technologies: MVN XSens 2021.2 Inertial Measurement Units (IMUs) (https://www.movella.com/products/motion-capture/xsens-mvn-awinda, accessed on 9 February 2025) and the two-wheeled, self-balancing humanoid robot Alter-Ego (https://softbots.iit.it/service-robots, accessed on 9 February 2025). The robot’s kinematic model is used to define the matrix $J_r^e$, the Jacobian of the entire robotic chain from the base to the right end-effector; this matrix accounts for the contributions of the wheels and the five right-arm joints.
The experiments focus on using upper-body movements as control inputs, tracked via nine XSens IMU sensors. Users look at the robotic avatar on a screen and adopt its perspective, treating the robotic avatar as an extension of their own body (see Figure 1).

3.2. Tailoring the System to the SCI Context

The primary objective is to determine whether the robotic joint controller enables users to bring the robot to the target, $x_e \rightarrow \bar{x}_e$, while minimizing their own motions, i.e., keeping $x_c \rightarrow \bar{x}_c$. In terms of error coordinates, this corresponds to stabilizing $e_e$ and $e_c$ at zero. This study aims to assess how intuitively users interact with the system, focusing on how quickly and effectively they learn to control it with minimal prior training. Key performance indicators include target accuracy, task completion time, and success rates.
Given the focus on the SCI context, we made several changes to the control algorithm. These include reducing the sensor requirements and introducing a dead-band zone around the target posture to mitigate involuntary reflex motions, a common challenge for SCI subjects, thereby enhancing controller stability. Additionally, accurately fine-tuning the weight matrices allows us to adjust the control system’s response to the user’s motions. This helps prevent the system from reacting to involuntary movements, such as muscle spasms, which SCI subjects may experience. By appropriately adjusting the gains, we can slow down the control system’s response, ensuring that brief, rapid involuntary motions, like spasms, do not cause significant changes in the robot’s movements; instead, the control algorithm is more effectively triggered by sustained, intentional motions. The same tuning was applied for all 12 unimpaired subjects and the two SCI participants. Additionally, as many SCI subjects retain partial control of the upper arm, we aimed to maximize the use of the available degrees of freedom (DoFs). Unlike Feder et al. [17], where the frame $x_c$ was aligned with the user’s shoulder, we opted to shift it to the next inertial measurement unit (IMU) sensor, placed on the right upper arm. Other sensors along the arm (e.g., wrist and hand) are required by the XSens system but do not actively contribute to the control algorithm. Finally, extensive empirical tuning of the control gains was required to optimize the algorithm’s response to motion requests.
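As an illustration of the dead-band mechanism described above, the following is a minimal sketch: displacement errors smaller than a tolerance around the relaxed posture are zeroed before entering the controller, so that brief reflexes or spasms do not generate robot commands. The threshold values here are illustrative assumptions, not the values tuned in our experiments.

```python
import numpy as np

POS_DEADBAND = 0.03  # [m]   translational tolerance (assumed value)
ROT_DEADBAND = 0.10  # [rad] rotational tolerance (assumed value)

def apply_deadband(e_c):
    """Zero the measured 6D displacement error e_c inside the dead-band,
    so only sustained, intentional displacements drive the controller."""
    e = np.asarray(e_c, dtype=float).copy()
    if np.linalg.norm(e[:3]) < POS_DEADBAND:   # translational component
        e[:3] = 0.0
    if np.linalg.norm(e[3:]) < ROT_DEADBAND:   # orientation component
        e[3:] = 0.0
    return e
```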

3.3. Experimental Protocol

The experimental protocol, summarized in Figure 2, consists of multiple phases and lasts up to 90 min. Although real-world experiments can be performed using the physical Alter-Ego humanoid robot, we conducted this study in a simulated environment with the robot’s virtual twin. This choice ensures a safer, more cost-effective testing process, accelerates development, and provides versatility for various applications, as discussed in Augenstein et al. [20]. Furthermore, the simulation code can be directly deployed on the real robot without modifications.
To meet SCI subjects’ needs, we conducted the experiments for all subjects (including the calibration phase) in a seated position.
The protocol begins with sensor placement and calibration, followed by a one-minute free exploration (FE). During this phase, participants freely control the robot in the virtual environment using upper-body movements, learning that the system allows simultaneous control of the robot’s base and right arm. The study then advances to a test phase with two assessments, referred to as Test 1 and Test 2, focusing on navigation and reaching tasks.
  • Test 1 (Figure 3a): Participants are given 5 min to guide the robot to three targets. The first two targets are on the floor, requiring navigation using the robot’s base, while the third target is accessed using the robot’s end-effector. The optimal strategy involves aligning the robot’s base with the floor targets and using the end-effector to touch the center of the final target.
  • Test 2 (Figure 3b): This more complex scenario evaluates the participants’ ability to interact with everyday objects. Here, they must use the robot’s end-effector to reach a red glass placed on a table.
As evidenced by the metrics discussed in the previous section, Test 1 focuses on providing quantitative measures for system analysis, whereas Test 2 offers a comprehensive qualitative overview.
After the initial tests, participants proceed to a training session designed to teach control strategies through five progressive sub-phases (Figure 4), promoting step-by-step learning.
The first sub-phase introduces control of the robot’s base rotation around the vertical Z-axis using the participant’s right arm. By rotating their arm along its longitudinal axis, participants learn to rotate the robot base clockwise (−90°, right) and counterclockwise (+90°, left), as shown in Figure 5a. In the figure, the red arrow represents the residual motions performed by the user, while the yellow arrow indicates the corresponding robot motions. As observed, at the end of each training sub-phase, once the user is satisfied with the robot’s achieved position, they return to their target posture, bringing the robotic joint velocities to zero.
The second sub-phase focuses on controlling the robot’s forward motion along the X-axis. As depicted in Figure 5b, participants move their right upper arm forward to guide the robot along a linear trajectory.
The third sub-phase introduces control of the robot’s end-effector. By moving their right upper arm along the Z-axis and Y-axis, participants can raise the robot’s arm, as shown in Figure 5c.
The fourth step combines the strategies learned in the second and third phases. Participants are required to move the robot forward, return to the initial N-pose and then raise the robot’s end-effector. The final training phase integrates all strategies from the first three steps. Participants move the robot forward while simultaneously turning it to the right, then return to the N-pose to raise the robot’s arm. All the training steps conclude with participants resetting to their initial position.
After training, participants repeat the same test sessions (Test 1 and Test 2) to evaluate the training’s effectiveness.
Finally, participants complete three questionnaires for qualitative feedback. The National Aeronautics and Space Administration Task Load Index (NASA TLX) [21] questionnaire measures perceived workload, the System Usability Scale (SUS) [22] evaluates usability and intuitiveness, and the Negative Attitude toward Robots Scale (NARS) [23] gathers insights on participants’ overall experience with robots.

3.4. Subjects

This study was conducted in collaboration with the Spinal Cord Unit of Santa Corona Hospital, Pietra Ligure (Savona, Italy). The study was approved by the Institutional Review Board (code CE DIBRIS protocol N. 2022/52, approved on 22 September 2022) and conformed to the ethical standards of the 1964 Declaration of Helsinki. Each subject provided written informed consent to participate in the study and to publish individual data. The study involved 12 unimpaired subjects (age: 26.9 ± 2.6 y.o., 3 females), with the only exclusion criterion being the presence of sensorimotor, neurological, or psychiatric impairments. All participants were naive to the control algorithm to avoid biases from prior knowledge. To evaluate the potential application of the control algorithm in assistive devices, a pilot test was conducted with two individuals with spinal cord injuries, providing feedback from potential end-users. The SCI subjects were classified based on injury level (cervical, thoracic, lumbar, sacral) and completeness (complete or incomplete). The American Spinal Injury Association (ASIA) offers a standardized international classification system [24], which grades the severity of the injury from A (complete) to E (normal). The first participant (SCI 1) was a 35-year-old tetraplegic male (C7 lesion, ASIA B), and the second (SCI 2) was a 34-year-old paraplegic male (L1 lesion, ASIA C). Both subjects followed the same experimental protocol as the unimpaired participants. The inclusion of the tetraplegic subject is motivated by the potential of this control system to exploit minimal residual upper-body motions for operating the external device, which can significantly assist in performing ADLs. The paraplegic subject is also a valuable participant for testing the new control algorithm, as individuals with paraplegia often face fatigue when performing continuous activities requiring extensive movements; using this control system can help reduce the effort and fatigue associated with such tasks.

3.5. Metrics and Statistical Analysis

As previously described, in Test 1 the robot is required to reach three targets, providing the data for three metrics used to assess participants’ performance (a computational sketch of all three follows the list below).
  • Distance Accuracy, $A$: based on the distance between the center of each of the three targets ($i = 1, 2, 3$) and the nearest point on the robot, computed as
    $$A_i = 1 - \frac{d_{min,i}}{D_{max,i}},$$
    where $d_{min,i}$ is the minimum distance between the robot and the $i$-th target center, and $D_{max,i}$ is the initial distance from the $i$-th target center to the point of the robot at which $d_{min,i}$ is later achieved. For each subject, the final $A$ score, normalized to the range $[0, 1]$, is computed as
    $$A = \frac{A_1 + A_2 + A_3}{3}.$$
  • Time, $\bar{t}$: the normalized time required to reach each target, defined as
    $$\bar{t}_i = 1 - \frac{t_i}{T_{max}},$$
    where $t_i$ is the time elapsed from the test start until $d_{min,i}$ is achieved, and $T_{max} = 300$ s is the maximum time allowed to complete the test. The final normalized time, $\bar{t}$, is computed as
    $$\bar{t} = \frac{\bar{t}_1 + \bar{t}_2 + \bar{t}_3}{3}.$$
  • Touch Accuracy, $A_t$: based on the normalized distance between the robot’s base center and the target center for the first two targets, and between the robot’s end-effector and the target center for the third target. It is calculated as
    $$A_{t,i} = 1 - \frac{d_{c,i}}{D_{c,i}},$$
    where $d_{c,i}$ is the distance from the robot’s base center to the target center for $i = 1, 2$, and from the end-effector to the target center for $i = 3$. The denominator, $D_{c,i}$, differs based on the target:
    For $i = 1, 2$, $D_{c,i} = r + \frac{l}{2}$, where $r$ is the target radius and $l$ is the robot base length.
    For $i = 3$, $D_{c,i}$ corresponds to the offset from the robotic hand’s palm center to the fingertip. The target thickness is considered negligible.
    The final $A_t$ score, normalized to $[0, 1]$, is computed as
    $$A_t = \frac{A_{t,1} + A_{t,2} + A_{t,3}}{3}.$$
    For the targets on the $x$-$y$ plane ($i = 1, 2$), touch accuracy is categorized as follows:
    Maximum accuracy ($A_t = 1$): the robot base center is within the target radius.
    Medium accuracy ($0 < A_t < 1$): another point on the robot base, but not its center, is within the target radius.
    Low accuracy ($A_t = 0$): the target is not reached.
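To make the three definitions above concrete, the following is a minimal sketch of how the metrics could be computed from the simulation logs. The function names are ours, and the clipping of touch accuracy to $[0, 1]$ reflects the normalization stated above; it is otherwise an assumption.

```python
import numpy as np

T_MAX = 300.0  # [s] maximum time allowed for Test 1

def distance_accuracy(d_min, D_max):
    """A = mean over targets of A_i = 1 - d_min,i / D_max,i."""
    return float(np.mean(1.0 - np.asarray(d_min) / np.asarray(D_max)))

def normalized_time(t, t_max=T_MAX):
    """t_bar = mean over targets of t_bar_i = 1 - t_i / T_max."""
    return float(np.mean(1.0 - np.asarray(t) / t_max))

def touch_accuracy(d_c, D_c):
    """A_t = mean over targets of A_t,i = 1 - d_c,i / D_c,i,
    with each term clipped to [0, 1] (unreached target -> 0)."""
    a = 1.0 - np.asarray(d_c) / np.asarray(D_c)
    return float(np.mean(np.clip(a, 0.0, 1.0)))
```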
The sign test, a nonparametric statistical test, was conducted in MATLAB (MathWorks, Natick, MA, USA) to evaluate differences in performance metrics between the pre-training and post-training phases. Specifically, the analysis focused on the results from Test 1, examining whether performance improvements across all three metrics were statistically significant after the training sessions.
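The analysis was run in MATLAB; for reference, the following is an equivalent sketch in Python. The paired sign test reduces to a binomial test on the signs of the pre/post differences, with ties discarded.

```python
import numpy as np
from scipy.stats import binomtest

def sign_test(pre, post):
    """Two-sided paired sign test: p-value for the null hypothesis that
    positive and negative pre/post differences are equally likely."""
    diff = np.asarray(post) - np.asarray(pre)
    diff = diff[diff != 0]                 # discard ties
    n_pos = int(np.sum(diff > 0))          # count improvements
    return binomtest(n_pos, n=len(diff), p=0.5).pvalue

# Example usage with hypothetical per-subject metric arrays:
# p = sign_test(accuracy_pre, accuracy_post)
```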

4. Results

This section is divided into four subsections: the first presents the statistical analysis of the data from the 12 unimpaired subjects in Test 1; the second examines the success rates of the 12 unimpaired subjects in both tests (Test 1 and Test 2); the third covers users’ feedback from the questionnaires; and the last addresses kinematics and algorithm validation.

4.1. Statistical Analysis

Here, we report the statistical analysis related to the n = 12 unimpaired subjects. We provide global graphs that also include the indices for the two SCI subjects; however, they were excluded from the statistical analysis due to their limited sample size and the differences in their injury levels, which prevent meaningful statistical comparisons. For clarity, throughout the analysis, their data are presented for visual inspection only: the tetraplegic subject is identified by the red dot, and the paraplegic subject by a green one.
Figure 6a–c show the box-plots for the three metrics before and after training. Distance accuracy significantly increased ($p = 0.006$), suggesting that participants became more precise in navigating the robot toward the targets. Similarly, touch accuracy showed a significant improvement ($p = 0.002$), indicating an enhanced ability to align the robot base or end-effector with the target centers after training. In general, an increase in successful touches across subjects can be observed, highlighting a tendency to improve after the training session. However, the normalized time metric did not exhibit a statistically significant change ($p = 0.774$), suggesting that, while participants became more accurate in reaching targets, time efficiency did not notably improve, potentially due to a cautious approach in achieving higher precision. It is possible that users prioritized accuracy over speed, especially after becoming more familiar with the system.
For the third target, accuracy was analyzed using both the robot base and the end-effector minimum distances. Figure 7a represents the distribution of the accuracy between the base and the third target. Figure 7b shows the distributions of the accuracy in terms of end-effector distance from the third target. After training, one outlier is identified, which corresponds to subject S05. The outlier was identified as the data point falling outside the lower whisker of the box-plot, meaning it is more than 1.5 times the interquartile range below the lower quartile. In our test, this indicates that the minimum distance between the end-effector and the third target was much larger than that achieved by other subjects, suggesting poorer performance in controlling the end-effector. This results in a null success rate for this subject in reaching the third target, as further clarified in the next subsection. However, since the overall performance of this outlier still shows improvement before and after training—except for the end-effector accuracy metric (Figure 7b)—we decided to include this subject in the subsequent analyses.
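For completeness, the box-plot outlier rule applied above can be written compactly; this sketch flags any value falling below the lower whisker, i.e., more than 1.5 times the interquartile range below the lower quartile.

```python
import numpy as np

def lower_whisker_outliers(values):
    """Boolean mask of points below Q1 - 1.5*IQR (lower box-plot whisker)."""
    q1, q3 = np.percentile(values, [25, 75])
    lower_fence = q1 - 1.5 * (q3 - q1)
    return np.asarray(values) < lower_fence
```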
The increased accuracy for the third target when using the robot base ($p = 0.006$) and the end-effector ($p = 0.039$) confirms that participants improved their control and spatial awareness, even in more complex tasks. For the two SCI participants (red dot for SCI 1 and green dot for SCI 2), we observe that SCI 2 performed worse than SCI 1. This can be attributed to the fact that the control system was primarily tailored to a tetraplegic injury level. Consequently, it was more customized for SCI 1, allowing for a better response to the smaller input signals typically associated with tetraplegic subjects. The results for SCI 1 are favorable, frequently showing performance above 50% and notable improvements between the pre-training and post-training phases. In contrast, SCI 2 often demonstrates metrics below 50%, which can be explained by the system’s design being more suited to tetraplegic subjects. Despite this, the system remains promising, as one of the primary objectives is to assess improvements between pre-training and post-training phases, which are also evident for SCI 2.

4.2. Success Rates

In addition to the statistical analysis conducted on Test 1, we also examined the success rates for both Test 1 and Test 2 only considering the 12 unimpaired subjects. Figure 8a shows the success rate histogram for Test 1, in which two bars for each target represent the percentage of successful touches before and after training, respectively. The success rate analysis further supports the positive impact of training. For instance, all 12 participants were able to reach the first target both before and after training. However, the second and third targets presented greater challenges initially, with pre-training success rates of 41.67% and 16.67%, respectively. Post-training, the success rate increased significantly to 100% for the second target and 75% for the third target, indicating an improvement in user proficiency over time. As introduced in the previous subsection with the identification of an outlier, it is evident from this analysis that the outlier is part of the 25% who failed to reach the third target after training.
For Test 2, we evaluated the success rate of the 12 unimpaired subjects in reaching the red glass on the table. As depicted in Figure 8b, the percentage of successful touches improved from 41.67% to 83.33%, further emphasizing the effectiveness of the training sessions. As for the two SCI participants, due to the limited sample size we do not report any statistical analysis, but we can highlight that, in Test 1, SCI 1 successfully reached all three targets both before and after training. On the other hand, SCI 2 successfully reached only the first target both before and after training, while the second and third targets were never reached. For Test 2, SCI 1 failed to reach the glass both before and after training, whereas SCI 2 did not reach the glass before training but successfully reached it after training.

4.3. Questionnaires

Results from the NASA TLX survey for the 12 unimpaired subjects (Figure 9a) can be summarized as follows: mental effort (median: 12, range: 3–17), physical fatigue (median: 6.5, range: 1–12), temporal demand (median: 10, range: 1–14), effort (median: 13, range: 2–16), frustration (median: 10, range: 4–13), and performance (median: 15.5, range: 7–20). Figure 9b compares the median values for each question across the unimpaired subjects with the answers given by the two SCI participants, identified as before by the red and green colors, respectively.
The SUS yields a usability score of 53.86 out of 100 (STD: 16.10, min: 25, max: 75). Figure 10a reports the maximum, minimum, and median values for each question of the questionnaire across all 12 unimpaired subjects. In Figure 10b, the median values of the unimpaired subjects are shown together with the answers of the two individuals with spinal cord injuries.
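For reference, the SUS score reported above follows the standard scoring rule from [22]: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 onto the 0–100 range. A minimal sketch:

```python
import numpy as np

def sus_score(responses):
    """Standard SUS scoring (Brooke, 1996).
    responses: 10 Likert answers in 1..5, in administered item order."""
    r = np.asarray(responses)
    odd = r[0::2] - 1        # items 1, 3, 5, 7, 9
    even = 5 - r[1::2]       # items 2, 4, 6, 8, 10
    return 2.5 * (odd.sum() + even.sum())
```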
NARS questionnaire results, divided into the three sub-scales, are as follows:
  • Sub-scale S1: Attitude toward interactions with robots has a median value of 1 (mean: 1.3333, STD: 0.5164).
  • Sub-scale S2: Attitude toward social influence of robots has a median value of 2 (mean: 2.2, STD: 0.4472).
  • Sub-scale S3: Attitude toward emotional aspects in robot interaction has a median of 3 (mean: 2.6667, STD: 1.5275).
Results from the NASA TLX and SUS questionnaires provide important insights into the cognitive and physical demands of the system. The NASA TLX survey reveals high levels of mental effort (median: 12) and effort (median: 13), with moderate levels of physical fatigue (median: 6.5) and temporal demand (median: 10); the system therefore requires significant cognitive engagement, while the physical strain remains relatively manageable for most participants. Notably, frustration levels are moderate (median: 10), which may be due to initial challenges in controlling the robot, as indicated by participants. The self-assessment score is very high for 11 out of 12 subjects (median: 15.5). Improvements in ease of use and efficiency are recommended. The System Usability Scale (SUS) score of 53.86 out of 100 indicates that the system’s usability is below average, suggesting areas for improvement. Scores in the range of 50–60 are often classified as “marginal”, implying that the system is acceptable for some users but not optimal. This finding underscores the need for further refinement to enhance ease of use, reduce cognitive load, and improve efficiency.
Analyzing the results of the two SCI subjects (Figure 9b and Figure 10b) and comparing them with those of the unimpaired subjects (Figure 9a and Figure 10a), both the NASA TLX and SUS responses indicate a generally lower perceived workload, as well as greater usability and intuitiveness of the system. This provides valuable insights for a future application of the system in the field of spinal cord injuries.
The NARS questionnaire results reveal a generally positive attitude toward robot interactions. Participants expressed a very favorable perception in terms of general interaction (S1, median: 1) and social influence (S2, median: 2). However, attitudes toward emotional aspects of interacting with robots (S3) were more neutral and varied, as indicated by a higher standard deviation.

4.4. Kinematics and Algorithm Validation

Important insights can also be drawn from outcomes related to the kinematics and the functioning of the algorithm. As previously mentioned, the goal of the control algorithm presented in [17] is to generate commands for the robotic joints ($\dot{q}_r$) to reach a desired target, $\bar{x}_e$, known only to the human.
In this section, we briefly describe the validation of the algorithm. Given the ultimate focus on SCI subjects, the tetraplegic participant, SCI 1, was selected as the primary subject for this analysis.
For Test 1, since the position of $\bar{x}_e$ is predefined in the virtual environment for the three targets (denoted as $\bar{x}_e^1$, $\bar{x}_e^2$, and $\bar{x}_e^3$), a straightforward comparison with the actual positions can be made.
Figure 11a shows the position achieved by the robot’s base in the x and y coordinates before training (blue) and after training (magenta), compared to the desired positions for the first two targets, $\bar{x}_e^1$ and $\bar{x}_e^2$. Figure 11b illustrates the comparison between the actual and desired end-effector positions for the third target, $\bar{x}_e^3$, along all three coordinates, both pre- and post-training. As observed, the participant successfully reached all targets even during the pre-training phase. However, the time required to achieve these targets was significantly reduced in the post-training phase.
To evaluate the effectiveness of the training sessions, we analyze the residual motions performed by the subject, SCI 1. Figure 12 illustrates the pose of the user’s frame, $x_c$, positioned on the right upper arm, in terms of both position and orientation, before and after the training sessions for the entire Test 1. As shown, the movements performed post-training are significantly smoother and less pronounced compared to those observed during the pre-training session. By specifically focusing on the translational motion, we observe that, after training, the movement remains smooth, slight, and consistent throughout the task, without sudden changes that could indicate a muscle spasm or an involuntary motion by the user at any given moment. However, as highlighted in the discussion section, further improvements are needed to enhance the control’s ability to distinguish between voluntary and involuntary movements. Regarding the orientation, more pronounced fluctuations can be observed after training. This is due to variations in the orientation component of the user’s frame, $x_c$, particularly when the last of the three targets is reached. Specifically, at this stage, the user must raise their upper arm, as shown in the third sub-phase of the training sessions in Figure 5c. This motion results in fluctuations in the orientation range.

5. Discussion

All 12 unimpaired subjects and the two subjects with SCI completed the entire experimental protocol, demonstrating the overall robustness and accessibility of the system. The primary goal of the data analysis was to assess whether a general improvement could be observed between pre-training and post-training tests, to evaluate the effectiveness of the training and the usability of the system. Interestingly, the SCI participants achieved performances comparable to those of the unimpaired subjects, especially in terms of time, touch accuracy, task completion, and success rate, highlighting the potential of this robotic system to support users with different levels of motor function. Their performance and success rates were evaluated individually, with the results analyzed separately. This approach allows us to conclude that, for these two subjects, the results appear promising, as improvements between the pre-training and post-training phases are evident, even though the performance of SCI 2 appears poor compared to the others.
The three main performance metrics (Distance Accuracy $A$, Time $\bar{t}$, and Touch Accuracy $A_t$) provided valuable insights into the impact of training on users’ interactions with the system.
Statistical analysis revealed that participants became significantly more precise in navigating the robot toward the targets following the training sessions. Similarly, improvements were observed in the third metric, touch accuracy. However, the time metric did not show a statistically significant change, indicating that participants prioritized achieving higher precision over reducing the time to reach the targets.
Additionally, the success rates achieved during both tests demonstrated marked improvements post-training. Nevertheless, the fact that some participants did not reach the final target can be attributed to the limited maximum time available for completing the task.
Overall, participants had a positive attitude toward robots in the first two sub-scales (S1 and S2), with a slight decrease in the second. The third sub-scale (S3) revealed more distributed opinions. This variability suggests that, while some participants were comfortable engaging with robots on an emotional level, others were less so, reflecting diverse attitudes toward the social and affective dimensions of human–robot interaction. Regarding the algorithm validation and system kinematics, the results confirm the overall correct functioning of the control algorithm, as it effectively translates the user’s residual motions into corresponding robot movements. This process generally leads to the convergence of both errors, $e_e$ and $e_c$, toward zero.
To illustrate this, our analysis focused on the tetraplegic subject, who successfully reached all targets both before and after the training sessions. The results show a significant reduction in the time required to complete the tasks post-training, demonstrating the effectiveness of the training steps for the subject.
Additionally, the analysis of residual motions highlights the subject’s learning process. During the pre-training phase, the subject tended to explore the task by performing more pronounced movements. However, after learning the strategy, post-training movements became minimal. The subject relied primarily on small displacements of the right arm, reserving arm orientation adjustments for the final phase of the task, specifically when the robot’s arm needed to be raised.
Nevertheless, the results from the pilot study involving SCI subjects highlight the potential of this control strategy. Although additional SCI participants are needed to fully assess the learning process during training sessions, this pilot study suggests promising effectiveness. Furthermore, involving more SCI subjects will require a more detailed investigation to enable the system to better discriminate between voluntary and involuntary motions. The two SCI participants in this study did not exhibit significant involuntary motions, and the fine-tuning of the weight matrices we adopted was sufficient for the control to function correctly, preventing the system from responding excessively to potential muscle spasms. However, other subjects may present more pronounced involuntary motions or long-lasting motor unit activities, for which additional corrective measures will need to be implemented in the control strategy. Based on the questionnaires, the two SCI subjects also confirmed the system’s usability. Furthermore, after training, we observed how slight, consistent user motions enabled the avatar robot to reach the desired goal, demonstrating how residual motions can be effectively used to control the BoMI system. A challenge for SCI users in controlling the BoMI system is the variability in injury levels: even among individuals with the same injury level, the range of possible motions can differ significantly. Although there is room for improvement, this control strategy shows promise in adapting to different types of users, thanks to its flexibility in placing the frame $x_c$ based on the specific residual motions available to each individual.

6. Conclusions

The findings of this study highlight several promising applications for the proposed system. The results demonstrate that the new control algorithm effectively leverages residual upper-body motions to control an external device with a larger number of degrees of freedom (DoFs). Moreover, the algorithm accurately estimates the target to be reached, as evidenced by the convergence of both errors, $e_e$ and $e_c$. Post-training analyses also reveal that residual motions become less pronounced, indicating that subjects learn to perform tasks more efficiently over time. These preliminary results suggest that this control strategy can assist the SCI population in operating assistive devices, such as a humanoid avatar, by leveraging their residual motion capabilities. For instance, a slight motion of the upper arm can be sufficient to command a robot to move toward an object the user intends to grasp.
The learning process observed is encouraging, with indices computed after the training sessions often outperforming those obtained beforehand. While the perceived workload is generally moderate, there is room for improvement in system usability.
Future studies could explore adapting the control framework by positioning the user’s reference frame, $x_c$, on alternative body parts, such as the shoulder, depending on the subject’s characteristics and level of impairment. This approach could expand the applicability of the system to a broader range of individuals with varying degrees of injury.
Finally, experiments with the real robot Alter-Ego and additional participants are necessary to validate the system further and strengthen statistical robustness. Although the virtual environment has provided the opportunity to test the experimental protocol across different scenarios and setups, we aim to test the same protocol with a real robot. Since the control algorithm and the robot’s kinematics remain unchanged, a similar success rate and performance to that observed in the virtual case can be expected. However, a different perception by the subjects, particularly in terms of usability and questionnaire responses, can be anticipated. From a perceptual standpoint, a real robot physically reaching the desired object, as opposed to a simulated one, may enhance the perceived potential of this system for real-world applications within the SCI context.
To sum up, the results confirm the effectiveness of the proposed approach and suggest that we are on the right path to enhancing assistive technologies for individuals with SCI.

Author Contributions

Conceptualization M.C. and A.B.; methodology G.G. and M.G.C.; software M.F. and A.F.; validation A.F. and M.C.; formal analysis A.F.; investigation A.F.; resources A.M. and A.B.; data curation A.F.; writing—original draft preparation A.F. and M.F.; writing—review and editing A.F., M.F., M.G.C. and M.C.; visualization A.F. and M.F.; supervision A.B. and M.C.; project administration M.C. and M.G.C.; funding acquisition M.G.C., A.B. and M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was carried out within the framework of the project “RAISE-Robotics and AI for Socio-economic Empowerment” and has been supported by European Union-NextGenerationEU. Funded by the European Union-NextGenerationEU. However, the views and opinions expressed are those of the authors alone and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. This work was supported by the Italian Ministry of Research, under the complementary actions to the NRRP “Fit4MedRob-Fit for Medical Robotics” Grant (# PNC0000007).

Institutional Review Board Statement

This study was approved by the Institutional Review Board (code CE DIBRIS protocol-N. 2022/52 approved on 22 September 2022) and it conformed to the ethical standards of the 1964 Declaration of Helsinki.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The authors confirm that the data supporting the findings of this study are available within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Krebs, H.; Volpe, B. Chapter 23—Rehabilitation Robotics. In Handbook of Clinical Neurology; Neurological Rehabilitation; Barnes, M.P., Good, D.C., Eds.; Elsevier: Amsterdam, The Netherlands, 2013; Volume 110, pp. 283–294. [Google Scholar] [CrossRef]
  2. DeVivo, M.J. Epidemiology of traumatic spinal cord injury: Trends and future implications. Spinal Cord 2012, 50, 365–372. [Google Scholar] [CrossRef]
  3. Van der Loos, H.M.; Reinkensmeyer, D.J. Rehabilitation and Health Care Robotics. In Springer Handbook of Robotics; Siciliano, B., Khatib, O., Eds.; Springer: Berlin/Heidelberg, Germany, 2008; pp. 1223–1251. [Google Scholar] [CrossRef]
  4. Gopinath, D.; Jain, S.; Argall, B.D. Human-in-the-Loop Optimization of Shared Autonomy in Assistive Robotics. IEEE Robot. Autom. Lett. 2017, 2, 247–254. [Google Scholar] [CrossRef] [PubMed]
  5. Jain, S.; Farshchiansadegh, A.; Broad, A.; Abdollahi, F.; Mussa-Ivaldi, F.; Argall, B. Assistive robotic manipulation through shared autonomy and a Body-Machine Interface. In Proceedings of the 2015 IEEE International Conference on Rehabilitation Robotics (ICORR), Singapore, 11–14 August 2015; pp. 526–531. [Google Scholar] [CrossRef]
  6. Rizzoglio, F.; Sciandra, F.; Galofaro, E.; Losio, L.; Quinland, E.; Leoncini, C.; Massone, A.; Mussa-Ivaldi, F.A.; Casadio, M. A myoelectric computer interface for reducing abnormal muscle activations after spinal cord injury. In Proceedings of the 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR), Toronto, ON, Canada, 24–28 June 2019; pp. 1049–1054. [Google Scholar]
  7. McFassel, G.; Hsieh, S.J.; Peng, B. Prototyping and evaluation of interactive and customized interface and control algorithms for robotic assistive devices using Kinect and infrared sensor. Int. J. Adv. Robot. Syst. 2018, 15. [Google Scholar] [CrossRef]
  8. Lee, J.M.; Gebrekristos, T.; Santis, D.D.; Nejati-Javaremi, M.; Gopinath, D.; Parikh, B.; Mussa-Ivaldi, F.A.; Argall, B.D. Learning to Control Complex Robots Using High-Dimensional Interfaces: Preliminary Insights. arXiv 2021.
  9. Lenzi, T.; De Rossi, S.M.M.; Vitiello, N.; Carrozza, M.C. Intention-Based EMG Control for Powered Exoskeletons. IEEE Trans. Biomed. Eng. 2012, 59, 2180–2190.
  10. Hochberg, L.R.; Bacher, D.; Jarosiewicz, B.; Masse, N.Y.; Simeral, J.D.; Vogel, J.; Haddadin, S.; Liu, J.; Cash, S.S.; van der Smagt, P.; et al. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 2012, 485, 372–375.
  11. Kirchner, E.A.; Tabie, M.; Seeland, A. Multimodal movement prediction-towards an individual assistance of patients. PLoS ONE 2014, 9, e85060.
  12. Tatinati, S.; Veluvolu, K.C.; Ang, W.T. Multistep Prediction of Physiological Tremor Based on Machine Learning for Robotics Assisted Microsurgery. IEEE Trans. Cybern. 2015, 45, 328–339.
  13. Yokota, T.; Hirose, K.; Tanabe, H.; Tsukagoshi, H. Sleep-related periodic leg movements (nocturnal myoclonus) due to spinal cord lesion. J. Neurol. Sci. 1991, 104, 13–18.
  14. Thomas, C.K.; Bakels, R.; Klein, C.S.; Zijdewind, I. Human spinal cord injury: Motor unit properties and behaviour. Acta Physiol. 2013, 210, 5–19.
  15. Zollo, L.; Rossini, L.; Bravi, M.; Magrone, G.; Sterzi, S.; Guglielmelli, E. Quantitative evaluation of upper-limb motor control in robot-aided rehabilitation. Med. Biol. Eng. Comput. 2011, 49, 1131–1144.
  16. Schultz, A.E.; Kuiken, T.A. Neural interfaces for control of upper limb prostheses: The state of the art and future possibilities. PM&R 2011, 3, 55–67.
  17. Feder, M.; Grioli, G.; Catalano, M.G.; Bicchi, A. A General Control Method for Human-Robot Integration. arXiv 2024.
  18. Carey, S.; Highsmith, M.; Maitland, M.; Dubey, R. Compensatory movements of transradial prosthesis users during common tasks. Clin. Biomech. 2008, 23, 1128–1135.
  19. Legrand, M.; Jarrasse, N.; Montalivet, E.; Richer, F.; Morel, G. Closing the Loop Between Body Compensations and Upper Limb Prosthetic Movements: A Feasibility Study. IEEE Trans. Med. Robot. Bionics 2020, 3, 230–240.
  20. Augenstein, T.E.; Nagalla, D.; Mohacey, A.; Cubillos, L.H.; Lee, M.H.; Ranganathan, R.; Krishnan, C. A novel virtual robotic platform for controlling six degrees of freedom assistive devices with body-machine interfaces. Comput. Biol. Med. 2024, 178, 108778.
  21. Hart, S. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In Human Mental Workload; Elsevier: Amsterdam, The Netherlands, 1988.
  22. Brooke, J. SUS: A quick and dirty usability scale. In Usability Evaluation in Industry; Redhatch Consulting Ltd.: Earley, UK, 1996.
  23. Syrdal, D.S.; Dautenhahn, K.; Koay, K.L.; Walters, M.L. The negative attitudes towards robots scale and reactions to robot behaviour in a live human-robot interaction study. In Proceedings of the 23rd Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB), Edinburgh, UK, 6–9 April 2009; pp. 109–115.
  24. Maynard, F.M., Jr.; Bracken, M.B.; Creasey, G.; Ditunno, J.F., Jr.; Donovan, W.H.; Ducker, T.B.; Garber, S.L.; Marino, R.J.; Stover, S.L.; Tator, C.H.; et al. International standards for neurological and functional classification of spinal cord injury. Spinal Cord 1997, 35, 266–274.
Figure 1. Experimental setup with the subject wearing XSens sensors and looking at the screen ahead. Panel (a) shows the setup for unimpaired subjects, while panel (b) shows the setup for the pilot study with SCI subjects.
Figure 2. Schematic representation of the experimental protocol.
Figure 3. Simulated virtual environments featuring the three targets for Test 1 (a) and the everyday objects for Test 2 (b).
Figure 4. Schematic representation of the training sequence.
Figure 5. Frames highlighting the training sessions. Human motions are shown alongside the corresponding robot movements across the three training sessions: (a) the sequence of motions required to rotate the robot, (b) the motions to move the robot forward, and (c) the motions to raise the robotic arm.
Figure 6. Box-plots of the three metrics, accuracy A (a), time t̄ (b), and touch accuracy A_t (c), before and after training, calculated from the data of the 12 unimpaired subjects. The red line indicates the median value of each metric across the unimpaired subjects, while the dots represent the two SCI subjects.
Figure 7. Box-plots of the accuracy A before and after training for the third target, calculated from the data of the 12 unimpaired subjects. Panel (a) considers the distance between the target and the robot base, while panel (b) considers the distance between the target and the robot's end-effector. The red line indicates the median value across the unimpaired subjects, while the dots represent the two SCI subjects, who were excluded from the statistics.
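To make the two accuracy variants of Figure 7 concrete, the following is a minimal sketch of how a distance-based metric of this kind could be computed from logged final positions. It is an illustrative reconstruction, not the study's actual implementation: the function name, the example coordinates, and the assumption that A is a plain Euclidean distance (rather than a normalized score) are all hypothetical.

    import numpy as np

    def accuracy(final_pos, target_pos):
        """Distance-based accuracy metric (hypothetical reconstruction).

        Figure 7 contrasts base-to-target and end-effector-to-target
        distances; here A is assumed to be the plain Euclidean distance,
        without any normalization the study may actually apply.
        """
        return float(np.linalg.norm(np.asarray(final_pos) - np.asarray(target_pos)))

    # Illustrative values (meters), not data from the experiment
    target = np.array([1.20, 0.40])
    base_final = np.array([1.05, 0.55])    # final robot-base position
    ee_final = np.array([1.18, 0.42])      # final end-effector position

    A_base = accuracy(base_final, target)  # Figure 7a-style metric
    A_ee = accuracy(ee_final, target)      # Figure 7b-style metric
    print(f"A (base): {A_base:.3f} m, A (end-effector): {A_ee:.3f} m")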
Figure 8. Success rates of target touches before (gray) and after (blue) training for Test 1 (a) and Test 2 (b), calculated based on the 12 unimpaired subjects. For Test 1, the success rate for the first target is always 100%, for the second it is 41.67% pre-training and 100% post-training, and for the third it is 16.67% pre-training and 75% post-training. For Test 2, the success rate improves from 41.67% pre-training to 83.33% post-training.
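With 12 unimpaired subjects, each reported percentage corresponds to a whole number of successful subjects. As a worked check of the reported values (assuming one scored attempt per subject per target):

    5/12 ≈ 41.67%,  2/12 ≈ 16.67%,  9/12 = 75%,  10/12 ≈ 83.33%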
Figure 9. NASA TLX results: (a) questionnaire scores of the 12 unimpaired subjects, showing the maximum, minimum, and median values for each question; (b) the median values of the 12 unimpaired subjects together with the answers of SCI 1 and SCI 2.
Figure 10. SUS results: (a) questionnaire scores of the 12 unimpaired subjects, showing the maximum, minimum, and median values for each question; (b) the median values of the 12 unimpaired subjects together with the answers of SCI 1 and SCI 2.
Figure 11. Robot coordinates before (blue) and after (magenta) the training sessions for the SCI subject during Test 1, compared with the actual target positions.
Figure 12. Pose of the user's right upper arm, x_c, during Test 1 before (blue) and after (magenta) training.