Testing

In this section, the results related to the validation and testing of the second prototype are presented and discussed. A real ABS prototype has therefore been 3D-printed to test the manufacturability of the kinematic architecture and to assess the effectiveness of the optimization strategy for different users. The tests were carried out at the Don Carlo Gnocchi Foundation Rehabilitation Center in Florence, Italy, using the same MoCap system employed to evaluate the motion of the hand alone; the camera configuration used to validate the second prototype is the same as that used during the motion capture of the hand.

The right index finger motion of the 13 subjects enrolled to validate the performance of the optimization procedure has been assessed through MoCap analysis during three consecutive flexion/extension movements. Taking these trajectories as reference, the proposed optimization process was carried out for the 13 healthy subjects, and the results are reported in Table 1.


**Table 1.** Length and width of the hand, maximum, average and standard deviation (Std Dev) of the error between the desired and the real trajectory for the index finger, and maximum, average and standard percentage error expressed with respect to the subjects' finger length.

The results reported in Table 1 make it possible to verify the reliability of the optimization algorithm when different hand exoskeletons have to be adapted to hands of different dimensions. The average of the maximum calculated errors (this measurement quantifies the maximum distance between the trajectory tracked by the user's finger and the one generated by the finger mechanism of the hand exoskeleton) was 3.16 mm (standard deviation 1.47 mm). This demonstrates the effectiveness of the implemented optimization strategy in adapting the a-priori defined one-DOF kinematic chain to the user's anatomy.

These new tests have shown promising results with regard to the portability of the device. With the addition of the passive DOF on the ab/adduction movement of the MCP joint, the exoskeleton is considerably more compliant with hand movements and can be worn longer without becoming as uncomfortable as before. Moreover, the automatic control over the ROM reliably prevents the exoskeleton from moving the fingers towards unnatural or painful positions, freeing the user from the fear of injury when attention lapses. The result achieved with the new optimization strategy for the geometry of the finger mechanism can be considered even more important: this new process allows the mechanism to be quickly adapted to different users, paving the way for future experimental campaigns on multiple patients. However, the button-triggered actuation still remained an open point to be further investigated.

#### **5. Third Prototype: Intuitive Control**

This current version of the exoskeleton (Figure 6) is a fully portable, wearable and highly customizable device that can be used both as an assistive and as a rehabilitative hand exoskeleton. Both the mechatronic design and the control system have been developed based on the patients' needs in order to satisfy users' daily requirements and increase their social interaction capabilities.

**Figure 6.** The figure shows, on the left, the final and current version of the exoskeleton prototype developed by the DIEF worn by a healthy subject and, on the right, the corresponding kinematic chain and CAD model. Colors and names (capital letters) of the components, and joints enumeration are reported as introduced in Section 2.

#### *5.1. Mechanical Design*

The mechanics of this latest exoskeleton has been revamped to achieve a more lightweight solution, improving wearability without compromising the accuracy achieved in replicating hand gestures. The new system is actuated by a single servomotor, and a dedicated cable-driven transmission system has been developed to open all four long fingers simultaneously. Different mechanism velocities are obtained through different pulley diameters, which are calculated depending on the user's finger dimensions and coupled to the same common shaft, driven by the motor through a belt transmission. The kinematic architecture of the mechanism has been further modified by eliminating component D of Figure 1. The thickness of components B and C has therefore been increased to bear the heavier load cycle these components are now subjected to during operation.
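The coupling between pulley diameter and mechanism velocity can be illustrated with a short sketch. This is not the authors' actual sizing procedure; the proportional-scaling rule, function name and the example cable travels are all assumptions, chosen only to show how, with every pulley on the same motor-driven shaft, each finger's speed follows from its pulley diameter.

```python
# Illustrative sketch (hypothetical sizing rule): all pulleys turn on one
# common shaft, so at a given shaft speed each finger's cable velocity is
# proportional to its pulley diameter. Scaling diameters to each finger's
# required cable travel makes all four fingers complete their stroke together.

def pulley_diameters(cable_travels_mm, reference_diameter_mm=10.0):
    """Scale pulley diameters so all fingers finish their stroke in sync.

    cable_travels_mm: required cable travel per finger (index..little),
    assumed proportional to the user's finger dimensions. The finger with
    the longest travel gets the reference diameter; the others are scaled
    down proportionally.
    """
    max_travel = max(cable_travels_mm)
    return [reference_diameter_mm * t / max_travel for t in cable_travels_mm]

# Example with hypothetical travels for index, middle, ring, little fingers.
print(pulley_diameters([42.0, 46.0, 44.0, 36.0]))
```

With these (invented) numbers, the middle finger keeps the 10 mm reference pulley and the little finger, with the shortest travel, receives the smallest one, so all fingers open and close in the same shaft rotation.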

#### *5.2. Actuation System and Control Strategy*

The first important difference with respect to the previous systems is that, as reported above, another motor has been removed, adopting a single-motor actuation system. Even though the motor itself has not been changed, the use of just one actuator has brought some advantages: the total weight of the system has been remarkably reduced and the control code has become computationally lighter, no longer having to manage the coordination between motors. Nevertheless, the main difference of this prototype lies in the triggering system. Tests conducted on the second version of the prototype stressed the importance for the user of being able to use both hands independently, prompting the replacement of the button-based triggering with a solution allowing autonomous control of each hand. Following the most recent research trends in the literature (e.g., [38,39]), a specific EMG-based control architecture has been developed, giving the user full control of the exoskeleton actuation without being forced to use the other hand (as happened with the first and second prototypes).

In accordance with the guidelines of the research work, the electronics of the system has also been reduced to the minimum necessary. Two MyoWare Muscle Sensors (AT-04-001) by Advancer Technologies (https://learn.sparkfun.com/tutorials/myoware-muscle-sensor-kit) have been chosen for collecting EMG signals from specific forearm muscles, i.e., the flexor digitorum superficialis and the extensor digitorum superficialis. These sensors, small (20.8 × 52.3 mm) and low-powered, measure the electrical activity of a muscle, outputting either raw EMG signals or enveloped EMG signals, which are amplified, rectified and integrated.

The proposed control strategy, presented in [40] and detailed in [41], can be split into two main parts, which are sequentially executed every 33 ms (i.e., at 30 Hz). The first is in charge of classifying the user's intentions from the muscular activity measurements, while the second manages the corresponding actuation of the system. Within the second part of the code, an outer control loop checks whether the system exceeds a fixed ROM, and an inner control loop, which is only active during hand closing, checks whether an object is grasped by evaluating the closing velocity of the index finger: when it drops below a fixed threshold while the motor is still running, it is reasonable to assume that the hand has encountered an object or an obstacle. Real-time information about the position and velocity of the index finger is collected by means of the magnetic encoder mounted on the exoskeleton at the MCP joint of the finger.
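The nesting of the two loops can be sketched as follows. This is a minimal illustration of the logic described above, not the authors' firmware: the function name, the ROM limits and the velocity threshold are all hypothetical placeholder values.

```python
# Minimal sketch of the actuation part of the 30 Hz control loop described
# above. ROM limits, threshold and the returned labels are assumptions.

ROM_MIN_DEG, ROM_MAX_DEG = 0.0, 80.0   # allowed MCP angle range, hypothetical
VELOCITY_THRESHOLD = 5.0               # deg/s, hypothetical grasp threshold

def control_step(position_deg, velocity_deg_s, motor_closing):
    """One 33 ms control-loop iteration (position/velocity from the MCP encoder).

    Returns 'stop_rom' if the fixed ROM is exceeded, 'object_grasped' if the
    index finger stalls while the motor is still closing, else 'continue'.
    """
    # Outer loop: stop if the mechanism leaves the allowed range of motion.
    if not (ROM_MIN_DEG <= position_deg <= ROM_MAX_DEG):
        return "stop_rom"
    # Inner loop (active only during hand closing): a velocity drop below the
    # threshold while the motor is still running suggests an object was met.
    if motor_closing and abs(velocity_deg_s) < VELOCITY_THRESHOLD:
        return "object_grasped"
    return "continue"

print(control_step(45.0, 2.0, motor_closing=True))   # prints "object_grasped"
```

In the real device the "stop" and "grasped" branches would halt or hold the servomotor; here they simply return labels to keep the sketch self-contained.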

Since the human hand can perform many different gestures and the corresponding muscles are very close to each other, a precise classification of every user intention usually requires the use of workstations, which is definitely at odds with the idea of cheapness and wearability this project is based on. Hand opening, hand closing and hand resting have therefore been considered as the only user intentions to be classified, as they represent the basic hand motions for the ADLs.

The classification phase is carried out by means of an algorithm called the "Point-in-Polygon algorithm", which works by casting a ray to the right. It takes as inputs the number of polygon vertices, their coordinates and the coordinates of a test point; at each iteration of the loop, the ray cast rightwards from the test point is checked against the polygon perimeter, and the number of times it intersects an edge is counted; if the number of crossings is odd, the point is inside; if even, it is outside. This classifier is tuned during a preliminary training phase through a custom Qt Graphical User Interface (GUI) developed by the authors. It is a user-friendly tool which allows for collecting EMG signals and displaying them on a 2D Cartesian plane, whose axes report data from the first and second sensor, respectively. Once the EMG data have been collected for all three of the aforementioned gestures, it is possible to manually draw the geometric figures which delimit the data corresponding to the same gesture. The correct choice of the parameters of the polygons (e.g., vertices, shape, size) represents a crucial point of the classification phase, which should be performed based on the patient's needs to improve accuracy and disturbance rejection.
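The even-odd ray-casting test described above can be written compactly. In this sketch the 2D points stand for the (sensor 1, sensor 2) EMG samples and the polygon for a manually drawn gesture region; the function name and the example square are illustrative, not taken from the authors' code.

```python
# Even-odd ray-casting ("Point-in-Polygon") test, as described in the text.

def point_in_polygon(x, y, polygon):
    """Return True if the test point (x, y) lies inside the polygon.

    A ray is cast rightwards from the test point; every edge that straddles
    the ray's height and crosses it to the right of the point toggles the
    inside/outside state, so an odd crossing count means 'inside'.

    polygon: ordered list of (x, y) vertex tuples (implicitly closed).
    """
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Edge straddles the horizontal ray, and its intersection with the
        # ray lies to the right of the test point?
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Example: a square "gesture region" with one point inside, one outside.
square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
print(point_in_polygon(2.0, 2.0, square))  # True
print(point_in_polygon(5.0, 2.0, square))  # False
```

Each incoming EMG sample would be tested against the three gesture polygons in turn, and the matching polygon determines the classified intention (open, close, or rest).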
