Article

Tactile Myography: An Off-Line Assessment of Able-Bodied Subjects and One Upper-Limb Amputee

Claudio Castellini, Risto Kõiva, Cristian Pasluosta, Carla Viegas and Björn M. Eskofier
1 Institute of Robotics and Mechatronics, German Aerospace Center (DLR), 82234 Weßling, Germany
2 Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, 33619 Bielefeld, Germany
3 Laboratory for Biomedical Microtechnology, Department of Microsystems Engineering, University of Freiburg, 79110 Freiburg, Germany
4 Machine Learning and Data Analytics Lab, Friedrich-Alexander University Erlangen-Nuernberg, 91058 Erlangen, Germany
* Author to whom correspondence should be addressed.
Technologies 2018, 6(2), 38; https://doi.org/10.3390/technologies6020038
Submission received: 4 January 2018 / Revised: 12 March 2018 / Accepted: 21 March 2018 / Published: 23 March 2018
(This article belongs to the Special Issue Assistive Robotics)

Abstract

Human-machine interfaces to control prosthetic devices still suffer from limited dexterity and low reliability; for this reason, the assistive-robotics community is exploring novel solutions to the problem of myocontrol. In this work, we present experimental results indicating that one such method, namely Tactile Myography (TMG), can improve the situation. In particular, we use a shape-conformable, high-resolution tactile bracelet wrapped around the forearm/residual limb to discriminate several wrist and finger activations performed by able-bodied subjects and a trans-radial amputee. Several combinations of features and classifiers were tested to discriminate among the activations. The balanced accuracy obtained by the best classifier/feature combination was on average 89.15% (able-bodied subjects) and 88.72% (amputated subject); when considering wrist activations only, the results were on average 98.44% for the able-bodied subjects and 98.72% for the amputee. The results obtained from the amputee were comparable to those obtained by the able-bodied subjects. This suggests that TMG is a viable technique for myoprosthetic control, either as a replacement for or as a companion to traditional surface electromyography.

1. Introduction

Upper-limb amputations are a serious impediment in a world where the large majority of daily tasks are performed with the hands. In spite of the remarkable body of research conducted in the scientific communities of assistive robotics, sensors, machine learning and biomedical engineering over the past couple of decades, we are still far from an ideal, reliable form of control for upper-limb prostheses. According to comprehensive reviews [1,2,3], rejection rates reach disastrous levels for all kinds of upper-limb prosthetic devices (body-powered, self-powered and/or passive devices). Reported mean rejection rates are on average one third for pediatric and one fourth for adult patients, with about 1900 traumatic upper-limb amputations per year in Europe sustaining an estimated total population of 94,000 upper-limb amputees [4]. At the same time, notice that one third of upper-limb amputees still use a passive prosthesis [5] and that at least one recent study [6] makes a decisive point for body-powered devices against self-powered ones. For a somewhat contradictory survey, see, e.g., [7].
The main reasons for this impasse include the lack of dexterity and long reaction times of the prosthetic device and, most crucially, the poor reliability of the Human-Machine Interface (HMI) used to control it [1,2,3,4,8]. In practice, the typical user of a self-powered prosthesis is still unable to hold an object exactly the way, and for as long as, (s)he wants.
Given the state of the art of myoelectric control, even an expensive multi-fingered hand prosthesis such as Touch Bionics’s i-LIMB Quantum [9] can provide little more than a body-powered one-degree-of-freedom (1-DOF) gripper. The appropriateness of such expensive solutions, as opposed to less expensive and less dexterous, but more reliable, body-powered 1-DOF grippers, definitely comes into question.
The community is therefore looking for new solutions to improve the reliability of myocontrol; in particular, multi-modal sensing techniques to non-invasively gather the user’s intent [10,11,12] are being explored. One such idea is that of detecting the deformations that occur on the surface of the forearm/residual limb, which denote muscular activity and in turn represent intended wrist, hand and finger activations. From now on, we will denote as activation any action voluntarily performed by a human subject, which is also the desired target of a hand/wrist prosthesis; for instance, wrist flexion implies the activation of some of the muscles in the forearm and should correspond to the flexion of the prosthetic wrist. This informal definition has the advantage that it abstracts away from whether the activation is an isometric muscle contraction or the result of a movement, and from whether it is being performed by able-bodied or by amputated subjects. Amputees in particular can still perform muscle contractions, which obviously lead to no movement and no generation of torque. Pioneering work by, among others, Craelius [13] has already shown that relevant information can indeed be obtained by embedding a small number of force sensors in a silicone sleeve or a semi-rigid cuff. A straightforward extension of this idea is to employ tactile sensing, which is a high-density form of force sensing; the resulting technique is called Tactile Myography (TMG), analogous to surface Electromyography (sEMG).
In the experiments reported here, we applied TMG to 10 able-bodied participants, as well as to a trans-radial amputee, and assessed its effectiveness in discriminating (classifying) several wrist, hand and finger activations. We developed a shape-conformable bracelet using our custom high-density tactile sensors [14]. The bracelet can be strapped around the forearm or the residual limb. During wrist, hand and finger activations, the bracelet yields a high-resolution image of the related forearm/residual-limb deformations, which cause pressure changes that are captured by the tactile sensors. In both experiments, we associated the features extracted from such images with the values of a visual stimulus and fed them to classification algorithms. In the first experiment, ten able-bodied subjects were instructed to track the stimulus, which showed repeated, simple wrist, hand and finger activations. In the second experiment, a very similar protocol was followed by a left trans-radial amputee.
The experimental results reveal that the classification accuracy for finger and wrist movements is on average 89.15% for the able-bodied subjects and 88.72% for the amputee; such values are in line with, or superior to, what can be found in previous literature on similar experiments (see, e.g., [15]). The results obtained by the amputee are comparable to those obtained by the able-bodied subjects, a surprising fact given that the subject had been amputated eight years before the experiment and had used no prosthesis at all during this time.

Related Work

The need for novel sensors to improve the accuracy and stability of myoelectric control has been advocated multiple times (e.g., in [11,12]), as a substitute for and/or complement to the more traditional technique of sEMG. There is no agreement so far as to which technique or combination of techniques yields the best overall results (Guo and colleagues, as well as Ravindra and Castellini, have already discussed this topic [15,16]). There is a drive in the community to hold on to surface techniques, since invasive methods such as needle EMG and direct connection to the nerves are still not practical for clinical use [4]. One such idea, and a very practical one given its alleged ease of production, low cost and high effectiveness (so far just in the laboratory), consists of detecting the changes in the volume and position of the muscles (bulging) while human subjects move and exert forces. Early experiments using pneumatics to detect muscle bulges date back as early as 1966 [17]. From about 2000 on, this very idea has been developed under various names and with slightly different kinds of sensors and housings: in [13,18], pressure vector decoding or residual kinetic imaging is defined, employing myopneumatic (pressure) sensors embedded in a silicone sleeve or in a bracelet; in [19,20], Force Myography (FMG) or surface muscle pressure is introduced, which exploits the same principle, but using very affordable Force-Sensing Resistors (FSRs). The same research group has recently applied FMG to gait control [20] and brain-injury rehabilitation [21]. Gait control was also realized in this way in an earlier paper by Lukowicz et al. [22]. FMG has been further extensively developed, tested and compared against sEMG and ultrasound imaging [15], assessed when fused with sEMG [23], and tested on amputees [24] with good-to-excellent classification accuracy. All in all, detection of muscle bulging using a small number of force or pressure sensors is well established under controlled conditions.
We define Tactile Myography (TMG) as the extension of FMG to high density (high resolution). The most interesting examples of TMG, both realized via a resistive pressure-sensing approach, are found in [25,26]. Radmand et al. [26] propose a 14 × 9 tactile array with a spatial resolution of 10 mm embedded in a rigid plastic socket and demonstrate excellent classification rates on able-bodied subjects. Furthermore, they collect data for several different body postures, an issue which is usually detrimental to any machine-learning-based myocontrol approach. (Notice that in Radmand et al.’s work, what we here denote as TMG is called HD-FMG, for High-Density FMG. In this paper, we stick to the acronym TMG to denote this approach.) In the work by Kõiva et al. [25], which can be considered an early, preliminary version of the present paper, TMG combined with regression yields root-mean-squared errors in line with the related literature employing FMG, sEMG and ultrasound imaging. TMG should show the same higher quality and stability of signals with respect to sEMG that FMG enjoys [15,23], with the advantage of potentially providing more information thanks to the higher resolution.
If we consider the output of a tactile sensor array as an image (i.e., each sensor output represents the intensity of a pixel), image-processing algorithms can be applied to extract discriminative information from the pressure map. For example, the Scale-Invariant Feature Transform (SIFT) is a well-established method designed for object recognition from images [27] that has been applied to tactile-based object shape recognition [28]. Gradient-based features extracted from Regions of Interest (RoIs) and Histograms of Oriented Gradients (HOG) have been utilized to discriminate finger positions from ultrasound images [29,30,31] and could potentially be applied to sensor arrays; a sketch of this idea follows below. In all these cases, the extracted features were used in combination with a classifier or a regression method to model the intent of the subject. Another approach is to learn the features and perform classification directly from the raw pressure maps, without handcrafted feature design. In this context, deep learning algorithms are becoming increasingly useful given their classification performance on image-processing tasks [32,33].
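As an illustration of treating a tactile frame as an image, the following sketch computes HOG features on a pressure map with scikit-image. The frame size and the cell/block parameters are illustrative assumptions suited to a small tactile array; this is not the feature pipeline used later in this paper.

```python
import numpy as np
from skimage.feature import hog

# Stand-in for one tactile frame (e.g., 8 rows x 40 columns of taxels).
pressure_map = np.random.rand(8, 40)

# HOG over 4 x 4 taxel cells; parameters chosen for the small array.
features = hog(pressure_map, orientations=8,
               pixels_per_cell=(4, 4), cells_per_block=(1, 1))
print(features.shape)  # 2 x 10 cells x 8 orientations = (160,)
```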

2. Materials and Methods

2.1. Experimental Setup

Two similar experiments were performed: the first involved the able-bodied subjects, whereas the second involved the amputated subject. The same experimental setup, described in detail in the following sections, was used for both experiments (see Figure 1). The setup was intentionally kept as simple as possible; in the end, it consisted only of the aforementioned tactile bracelet and a monitor on which a visual stimulus was displayed.

2.1.1. Tactile Bracelet

The tactile bracelet (Figure 2) is composed of up to 10 tactile sensor boards and a single main board, which collects the tactile data from the sensor boards and optionally provides motion-tracking capabilities via an embedded inertial measurement unit. To enable optimal coverage of different arm circumferences, the number of connected sensor boards is variable by design.
Each tactile sensor is based on a resistive working principle in which the interface resistivity between two surfaces changes according to the applied load. This is achieved using conductive tracks as electrodes and conductive foam or rubber as the sensor material, a technique first introduced by Weiss and Wörn in [34]. Figure 2b illustrates this basic working principle and depicts the three parts that contribute to the final sensor resistance, R_t = R_s1 + R_s2 + R_v: the variable contact interface resistances, R_s1 and R_s2, and a constant sensor-material volume resistance, R_v. In its simplest form, the tactile sensor electrodes can be produced using a common printed circuit board (PCB). A number of possible sensor materials can be considered, such as elastomer foam with added carbon particles (as used in typical electrostatic discharge (ESD) packaging foam), conductive fabrics and conductive rubber. For improved skin breathability and for optimal sensor characteristics, we opted for porous conductive foam in the bracelet. In the first experiment, 6 mm-thick polyurethane foam (Warmbier 4451.W [35]) was used; in the second, we opted for a thinner (3 mm), but stiffer and thus more robust polyolefin foam (Polyform PE PF554 [36]).
Each sensor board in the tactile bracelet had 4 × 8 tactile pixels (taxels) in a 5-mm grid, sampled using two 16-channel, 12-bit analog-to-digital converters (ADCs) in parallel (Figure 3). The digitized data were provided internally on a Serial Peripheral Interface (SPI) bus. With a board width of 20 mm, an adult can typically fit all 10 sensor boards around the circumference of the arm, resulting in a high-resolution tactile image of 10 × 4 × 8 = 320 taxels. The data from the bracelet are streamed out over USB using virtual serial-port communication (USB-CDC). The tactile data from the bracelet were captured at 80 frames per second. The host software rearranges the stream of incoming taxel data and can visualize it as a two-dimensional tactile array. A wide double-sided hook-and-loop band was selected for mounting the sensor and data-collector boards around an arm. This made quick individual sensor positioning and overall bracelet circumference and tension adjustment possible, while simultaneously providing a sturdy attachment. Appropriate fasteners were designed and 3D printed to attach the printed circuit boards to the hook-and-loop band.
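As an illustration of the host-side rearrangement, the sketch below reshapes one flat 320-value frame into a two-dimensional tactile image. The taxel ordering on the bus (board by board, each board as an 8 × 4 patch, boards side by side around the arm) is an assumption made for illustration; the paper does not specify it.

```python
import numpy as np

def assemble_frame(raw_frame, n_boards=10):
    """Rearrange one flat frame of 12-bit taxel readings into a 2D image.

    Assumes the frame lists boards in order, each board contributing an
    8 x 4 patch; the result is an 8 x (4 * n_boards) tactile image.
    """
    boards = np.asarray(raw_frame, dtype=np.uint16).reshape(n_boards, 8, 4)
    return np.hstack(boards)

frame = assemble_frame(np.arange(320))  # 10 x 4 x 8 = 320 taxels
print(frame.shape)                      # (8, 40)
```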

2.1.2. Visual Stimulus

Each participant was asked to try to mimic a sequence of wrist/hand/finger activations performed by a realistic 3D hand model on a computer monitor. The model anatomically matches a human hand and supports independent flexion of each finger, as well as the activation of the three DOFs of the wrist, with values normalized between 0 and 1. Each DOF activation rose from 0 to 1 and back with a sinusoidal pattern, realistically simulating the flexion of the fingers, the rotation of the thumb, the flexion of the wrist, and so on. (For our purposes, positive and negative activations of the wrist were treated as separate activations; e.g., wrist flexion was treated as a different activation from wrist extension, since different muscle groups are required to perform them.)
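A minimal sketch of such a stimulus profile follows; the cycle duration is an illustrative assumption (the paper does not state it), and the raised-cosine form simply realizes a 0 → 1 → 0 sinusoidal activation.

```python
import numpy as np

def activation_profile(duration_s=3.0, fs=80.0):
    """One stimulus cycle: a DOF activation rising sinusoidally from 0
    to 1 and back to 0. `duration_s` is a hypothetical cycle length."""
    t = np.linspace(0.0, duration_s, int(duration_s * fs), endpoint=False)
    return 0.5 * (1.0 - np.cos(2.0 * np.pi * t / duration_s))

profile = activation_profile()
print(profile.min(), profile.max())  # ~0 at the ends, 1 at mid-cycle
```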
The values of the DOF activations of the visual stimulus were used as the ground truth against which the tactile features were matched. In our study, we only used static hand positions. Data collected during the transition and adaptation phases between one activation and the following were not considered in the analysis. This is an instance of the so-called realistic setup or on-off goal-directed stimulus, already successfully employed, e.g., in [30,37]. This method has the drawback of potentially reducing the precision in the prediction of the intended activations due to the delay required by the subject to adapt; nevertheless, it is an accepted way to associate an intended activation with a specific input signal pattern; in the case of amputees, it actually is the only possible way, since amputees cannot produce a reliable ground truth in principle.
Figure 4a depicts the timing of the visual stimulus, as well as the data recording.

2.2. Participants

Ten able-bodied participants (two left-handed and eight right-handed, 27.3 ± 5.66 years old) and one male left trans-radial amputee were recruited for this study. The amputee was 68 years old and had undergone surgery in 2007 following a work-related incident involving explosive materials. He reported no usage of any prosthesis whatsoever since the amputation and stated that he could distinctly move each phantom finger without phantom-limb pain. The residual limb was about 20.5 cm long and in good condition (no skin rash, no abnormal redness, no reported pain or exfoliation). All participants were informed, both in writing and orally, about the procedure and possible risks and gave written informed consent before each experiment began. The experiments were performed according to the World Medical Association (WMA) Declaration of Helsinki and were preliminarily approved by the Work Council of the German Aerospace Center (DLR).

2.3. Experimental Protocol

Each able-bodied participant sat comfortably in front of the monitor showing the visual stimulus. The experiment consisted of ten repetitions of a sequence of six activations each, namely rotation (i.e., opposition) of the thumb, flexion of the index finger, flexion of the little finger, flexion of the wrist, extension of the wrist and supination of the wrist (Figure 4b). This specific choice of activations was motivated by three factors: (a) to experiment with activations that are routinely encountered in daily living; (b) to keep the duration of the experiment reasonably short for both able-bodied and amputated participants; and (c) to select activations that could correspond to the DOFs available in current commercial multi-fingered prosthetic hands that can be combined with prosthetic wrists. In order to produce activations requiring a reasonable amount of force/torque, the participants were instructed to press with their fingers on the bare table (thumb rotation, index flexion and little-finger flexion) and to simply flex, extend and supinate the wrist to their limit with the arm lifted from the table. The preliminary results of this experiment, as well as more details on the experimental setup, are described in a previous publication [25].
The amputee followed a similar experimental protocol. He was introduced to the experiment, seated comfortably in front of the monitor as shown in Figure 1 and similarly asked to mimic the visual stimulus. The protocol consisted of three trials, each in turn consisting of five repetitions of the same sequence of activations described previously for the able-bodied participants (Figure 4b). The first trial was performed with the residual limb, i.e., the bracelet was placed on the participant’s residual limb, and he was asked to try to perform the activations seen on the screen with his residual limb; the second trial was performed with his intact limb and was therefore a shortened version of the protocol for able-bodied subjects; lastly, the third trial was again performed with the residual limb. This protocol was motivated, on the one hand, by the need to keep the amputee’s protocol as similar as possible to that of the able-bodied subjects, in order to provide comparable results; on the other hand, we wanted to check whether any learning effect would appear between the first and the third trial. The complete experiment lasted about 30 min.

2.4. Data Analysis

Tactile data were acquired from the tactile bracelet at a sampling rate of 80 samples per second; the values of the visual stimulus were synchronized by linearly interpolating the timestamps of the respective data channels. The data from the tactile sensors were filtered with a first-order Butterworth bandpass filter with cutoff frequencies at 0.01 and 1 Hz to remove high-frequency disturbances, the heart rate and the signal drift due to memory effects of the foam. Figure 4c shows a typical pattern obtained for each movement required in the experimental protocol (average of all movements for one able-bodied subject).
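A minimal sketch of this preprocessing step with SciPy follows. Whether the original analysis used causal or zero-phase filtering is not stated in the paper; zero-phase filtering (sosfiltfilt) is an assumption that suits offline analysis.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_taxels(data, fs=80.0, low=0.01, high=1.0):
    """First-order Butterworth bandpass (0.01-1 Hz) applied per taxel.

    `data` has shape (n_samples, n_taxels); second-order sections keep
    the filter numerically stable at such low normalized cutoffs.
    """
    sos = butter(1, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, data, axis=0)

filtered = bandpass_taxels(np.random.rand(8000, 320))
```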
Different methods were considered for feature extraction, including Harris corner extraction [38], the structural similarity index [39] on bicubically interpolated data and Region-of-Interest (RoI) gradients [30,40,41]; the latter yielded the highest classification accuracies in a preliminary round of experiments. As opposed to the RoIs used in [30,40], which were round-shaped and overlapped one another by about 10%, in this case, due to the lower resolution of the tactile data with respect to the ultrasound images used in those papers, we adopted a simpler strategy, defining each RoI as a non-overlapping 4 × 4 taxel square. Then, as in the aforementioned papers, for each RoI we computed three parameters of interest, α, β and γ, linearly approximating the taxel intensities. (In more detail, for each RoI i, α_i and β_i represent the mean intensity gradients along the x and y axes, while γ_i is an intensity offset. Further details can be found in [30].) The feature extraction method was uniform across all subjects and, once again in line with previous references, was not targeted at any anatomical feature. This reduces the preparation time of the experiment.
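The following sketch reproduces this feature extraction under stated assumptions: each non-overlapping 4 × 4 patch of the tactile image is approximated by a least-squares plane I(x, y) ≈ αx + βy + γ, and the per-RoI coefficients are concatenated. Axis orientation and RoI ordering are illustrative choices.

```python
import numpy as np

def roi_gradient_features(image, roi=4):
    """Fit I(x, y) ~ alpha * x + beta * y + gamma on each non-overlapping
    roi x roi patch and concatenate (alpha, beta, gamma) over all RoIs.

    For an 8 x 40 tactile image with 4 x 4 RoIs this gives 20 RoIs and
    hence a 60-dimensional feature vector, as in the paper.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:roi, 0:roi]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(roi * roi)])
    feats = []
    for r in range(0, h, roi):
        for c in range(0, w, roi):
            patch = image[r:r + roi, c:c + roi].ravel()
            coef, *_ = np.linalg.lstsq(A, patch, rcond=None)
            feats.extend(coef)  # alpha_i, beta_i, gamma_i
    return np.asarray(feats)

x = roi_gradient_features(np.random.rand(8, 40))
print(x.shape)  # (60,)
```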
A portion of the collected data was then reduced to three dimensions using principal component analysis (PCA) and visualized for qualitative assessment (see Figure 5, where the samples collected during each stimulated movement are visualized in different colors; notice that in some cases a sample set is not visible, since its cluster is overshadowed by other ones; this is the case for thumb rotation in Figure 5e, for instance). The more separated the clusters of data appear in three dimensions, the better the classification results are expected to be; in particular, wrist movements are expected to be more distinguishable from one another in the feature space than when finger movements are also considered.
For comparison, the graphs in the first column (Figure 5a,c,e) show the full set of movements, while those in the second column (Figure 5b,d,f) show only the wrist movements. Figure 5a,b show the data gathered from the able-bodied subject with the most separable classes (according to Fisher’s separability index); Figure 5c,d show those obtained from the able-bodied subject with the least separable classes; and Figure 5e,f show those obtained from the amputee during the third trial.
The qualitative examination of these graphs indicated that the wrist movements appeared well separated in all cases (even in the worst ones) and that the finger movements tended to cluster less cleanly than the wrist movements. Furthermore, the data collected from the amputee during the third trial were hardly distinguishable from those of the able-bodied subjects.
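A minimal sketch of this qualitative check with scikit-learn follows; the random array stands in for the recorded 60-dimensional feature vectors.

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(500, 60)       # stand-in for the recorded features
X3 = PCA(n_components=3).fit_transform(X)
print(X3.shape)                   # (500, 3), ready for a 3D scatter plot
```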
Given this analysis, we chose to use two very simple classification methods, namely a k-Nearest-Neighbors (k-NN) classifier using the Euclidean distance and the Nearest-Cluster-Centroid (NCC) classifier using both the Euclidean and the Mahalanobis distances. The choice of k-NN and NCC was substantiated by their simplicity (in comparison to, e.g., Artificial Neural Networks (ANNs)) and fast training, which can easily be implemented for on-line analysis. For the k-NN classifier, we chose k = 1, which yielded the highest accuracy across values of k between 1 and 10. The classes to be discriminated were either the seven movements (thumb rotation, index flexion, flexion of the little finger, wrist flexion, wrist extension, wrist supination and rest) or the four wrist movements (wrist flexion, wrist extension, wrist supination and rest). Since there were 20 RoIs in each tactile image and three features were extracted from each RoI, each image sample was represented by a 60-dimensional feature vector.
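A sketch of the three classifiers follows. The 1-NN and Euclidean NCC map directly onto scikit-learn; the Mahalanobis NCC is a minimal custom implementation, assuming a single covariance matrix pooled over all training samples (the paper does not specify how the covariance was estimated).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid

class MahalanobisNCC:
    """Nearest-cluster-centroid with Mahalanobis distance; the pooled
    covariance over all training data is an illustrative assumption."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        self.VI_ = np.linalg.pinv(np.cov(X, rowvar=False))
        return self

    def predict(self, X):
        # Squared Mahalanobis distance to each class centroid.
        d = [np.einsum("ij,jk,ik->i", X - m, self.VI_, X - m)
             for m in self.centroids_]
        return self.classes_[np.argmin(d, axis=0)]

knn = KNeighborsClassifier(n_neighbors=1)  # k = 1, Euclidean by default
ncc_euclidean = NearestCentroid()
ncc_mahalanobis = MahalanobisNCC()
```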
For the first experiment, only the last five repetitions of the movement sequence were used for further analysis (the first prototype of the tactile bracelet, with the softer foam, was more susceptible to the memory effect of the foam). The three selected classifiers (k-NN, NCC with Euclidean distance and NCC with Mahalanobis distance) were evaluated using leave-one-repetition-out cross-validation for each subject: each classifier was trained on four repetitions and tested on the held-out one. Since the resting position was performed more often than the other movements, balanced accuracy was used; its mean and standard deviation were computed for each subject.
In the second experiment, the same classifiers used in the first experiment were applied to the three trials performed by the amputee. For each trial, leave-one-repetition-out cross-validation was applied, using four repetitions as the training set and one as the testing set. The mean and standard deviation of the balanced accuracy were calculated for each trial.
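A sketch of this evaluation loop follows; `repetition_ids` is a hypothetical array marking which repetition each sample belongs to, and any of the classifiers sketched above can be passed in.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neighbors import KNeighborsClassifier

def loro_balanced_accuracy(X, y, repetition_ids, clf=None):
    """Leave-one-repetition-out CV: train on four repetitions, test on
    the held-out one; balanced accuracy corrects for the rest class
    being over-represented."""
    clf = clf or KNeighborsClassifier(n_neighbors=1)
    scores = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=repetition_ids):
        clf.fit(X[train], y[train])
        scores.append(balanced_accuracy_score(y[test], clf.predict(X[test])))
    return float(np.mean(scores)), float(np.std(scores))
```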

3. Results

3.1. Experiment #1 (Able-Bodied Subjects)

Figure 6 shows in its first row the classification accuracy obtained in the first experiment. Figure 6a shows that the k-NN and the NCC with Euclidean distance outperformed the NCC with Mahalanobis distance in the classification of all movements for all able-bodied subjects. Only in three cases (Subjects 6, 7 and 10) did the k-NN yield lower accuracies than the NCC with Euclidean distance. The accuracy of the k-NN varied between 70.40% and 100%, with a mean accuracy of 89.15% over all subjects. To determine whether the differences in classifier performance were significant, a Kruskal–Wallis one-way analysis of variance was performed, as the mean accuracies of the different classifiers were not normally distributed. A significant difference was found between the three classifiers (H(30) = 12.26, p = 0.0022). A subsequent multiple-comparison test showed that the mean accuracies over all able-bodied subjects using NCC with Mahalanobis distance differ significantly from those using k-NN or NCC with Euclidean distance, whereas there is no significant difference between k-NN and NCC with Euclidean distance.
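The corresponding test is a one-liner with SciPy; the arrays below are placeholders for the ten per-subject mean balanced accuracies of each classifier, not the study's data.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)            # placeholder data only
acc_knn = rng.uniform(0.70, 1.00, 10)     # one value per subject
acc_ncc_euc = rng.uniform(0.70, 1.00, 10)
acc_ncc_mah = rng.uniform(0.55, 0.90, 10)

H, p = kruskal(acc_knn, acc_ncc_euc, acc_ncc_mah)
print(f"H = {H:.2f}, p = {p:.4f}")
# A post-hoc multiple-comparison test (e.g., Dunn's) would follow a
# significant result, as in the analysis above.
```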
The classification of wrist movements, including the resting position, showed high accuracies overall (Figure 6b). The k-NN appeared to perform better than the other classifiers, obtaining accuracies between 95.13% and 100%. The mean accuracy over all subjects was 98.44% for k-NN, 97.77% for NCC with Euclidean distance and 94.53% for NCC with Mahalanobis distance. However, in this case the Kruskal–Wallis one-way analysis of variance showed no significant difference in the performance of the classifiers (H(30) = 2.41, p = 0.3).

3.2. Experiment #2 (Amputee)

The second row of Figure 6 shows the classification accuracy obtained in the second experiment, i.e., on the data obtained from the amputee. Figure 6c shows that, over all movements, the highest accuracies were obtained in the third trial, which was performed with the amputated arm. The mean accuracy of the k-NN was 84.37% in Trial 1 and 84.40% in Trial 2; the same classifier achieved a mean accuracy of 93.07% in the third trial.
Similarly to the results obtained from the able-bodied subjects, the classification of the wrist movements was more accurate (Figure 6d). In this case, the NCC with Mahalanobis distance performed best in the first trial, but worse than the other classifiers in Trials 2 and 3. The mean accuracy of the k-NN was 100% for the intact arm in Trial 2 and 99.18% for the amputated arm in Trial 3. For this experiment, no statistical analysis was performed, as we only collected data from one amputee.

4. Discussion

In this paper, we have presented a further investigation into the usage of Tactile Myography (TMG) for dexterous myocontrol, as a substitute for or a companion to surface EMG. A few remarks are necessary.

4.1. Tactile Myography for Myocontrol

Myocontrol is the usage of bodily signals related to muscle activation to control assistive or rehabilitation devices, prostheses being one of the paradigmatic examples [10]. As happens with machine learning (ML) in general, ML-based myocontrol will work reliably only if repeatable and distinct signal patterns for each desired action can be determined [42]. To this aim, the limitations of sEMG are well known in the scientific community (see, e.g., [12]): besides being prone to problems affecting the physical nature of the sensors (e.g., electrode displacement and lift-off), electromyography suffers from the changes in the signal, which can be observed whenever muscle fatigue appears and/or due to sweating, which alters the conductivity of the skin.
All in all, the question arises whether TMG is a valid replacement for, or companion to, sEMG. It seems reasonable to claim that detecting deformations of the residual limb via pressure (using TMG or its low-resolution counterpart, FMG) is affected neither by the conductivity of the skin nor by fatigue, both problems inherent to the very nature of electromyography [20,23]. The other problems remain on the table, bound to the inevitable necessity that pressure/force sensors stay in place and in contact with the skin. Besides this, force/pressure sensing enables the detection of contact forces, accelerations and orientations, as well as of deformations in the subject’s body due to anatomical structures other than muscles (e.g., tendons). Such factors could be problematic for the detection of activation patterns, but could also represent further valuable information to be exploited.
A thorough study comparing sEMG and TMG/FMG, both from the point of view of the signals generated and as a means for myocontrol, including the usage of either or both techniques in real life, has yet to appear. Nevertheless, there are strong hints pointing at the feasibility of this marriage: the potential is great. In particular, in [23], the signals of FMG and sEMG were qualitatively and quantitatively compared, showing clearly that FMG signals are more stable in time (less oscillatory behavior) and better separated in the input space. Furthermore, in [24], an extensive study on four trans-radial amputees was performed, showing that eight FSR sensors suffice to reach classification accuracies comparable with the standard found in the sEMG literature. Notice, anyway, that both aforementioned studies employ FMG rather than TMG; that is, they rely on a few force-sensing resistors scattered on the forearm of able-bodied subjects or embedded inside a socket for amputees. On the other hand, as far as proper TMG is concerned, Radmand et al. [26] have already shown that TMG “footprints” or “images” can be used to classify a relatively high number of hand and wrist activations with remarkable precision. Their experiment was performed on 10 able-bodied subjects, using a tactile sensor that has roughly half the density of the one we present and is rigid.
A direct comparison of sEMG and TMG in conditions similar to those described here is, as far as we know, not available at the moment; however, in [43], such a comparison/combination of techniques was carried out during an online task, in which TMG was shown to outperform sEMG.

4.2. What This Study Shows

To the best of our knowledge, this is the first study in which TMG (as opposed to FMG) has been applied to an amputee, albeit for a short time and under lab-controlled conditions. The results, however, show that high classification rates can be obtained; moreover, the amputee’s results were comparable to those obtained under very similar conditions from a pool of able-bodied subjects. In particular, the tactile bracelet we have presented, having 320 taxels with a spatial resolution of 5 mm, was adapted to the forearms of 10 able-bodied subjects, as well as to the residual limb of a left trans-radial amputee. It is worthwhile to note that we employed no signal windowing at all: we used the signals as they came out of the device, with a first-order bandpass filter; therefore, the delay between the data acquisition and the feature extraction was negligible. On top of this, the optimal combination of features and classifier is the simplest and fastest one, which lets us claim that an online prediction would work with a barely noticeable delay between the intent of the subject and the prediction itself. Of course, this remains the subject of future assessment, as we have performed no online study so far.
Notice that the experiment reported in this paper cannot yet indicate how well TMG would work in practice. The accuracy results obtained here might be insufficient; even values around 98% would in some cases not be enough [6]. Moreover, the experimental protocol only involved standard movements (i.e., no activities of daily living), offline analysis and no additional weight on the forearm while performing the movements, as would be the case while grasping. All tests were performed on only five repetitions of the selected actions, which is hardly a good measure of their variability in practice; this could also have influenced the choice of the best feature/classifier combination.
Engaged in a simple, repetitive experiment involving muscle activations, both the able-bodied subjects and the trans-radial amputee produced corresponding tactile patterns, which enabled a simple classifier to obtain high accuracies, ranging from 70% to 100% across all trials. These values are in line with, or even superior to, those found in the literature for other techniques, among which is the traditional choice for myocontrol, sEMG (see, e.g., [15] for an offline comparison among such human-machine interfaces). In an excellent recent survey by Fang et al. [44], one sees that classification results obtained from offline data gathered from a similar population of amputated subjects (that is, trans-radial amputees) hover in the best cases between 90% and close to 100%. Such numbers also agree with the already mentioned articles by Cho et al. [24] and Radmand et al. [26]. A remark is necessary here, though: one must be careful with such results, since it is well known that off-line abstract performance rarely translates to practical usability [45]. See the Future Work subsection for more about this.
We were able to distinguish single-finger activations and even to discern thumb rotation, which is all the more interesting since the muscles operating this activation lie inside the hand. We speculate that synergistic activations of extrinsic muscles might be responsible for this surprising result; in the amputated subject, spontaneous re-innervation could also play a role. Let us remark that each residual limb is unique; therefore, high-resolution approaches such as TMG could definitely help identify and explore the spots with the highest remaining activity.
The comparison of features and classification methods we performed shows that the simplest features and classifiers work best. This is also visually clear if one inspects the PCA-reduced patterns of Figure 5. Not surprisingly, wrist movements were easier to classify, whereas introducing finger movements decreased the classification accuracy. The results also show that the principle enforced by the tactile bracelet works to a good extent in the case of an amputated subject. To the best of our knowledge, this is the first attempt to assess the usefulness and feasibility of TMG including an amputated subject. Moreover, the amputated subject’s patterns are qualitatively indistinguishable from those produced by the able-bodied subjects (Figure 5). The results also suggest a learning effect in the case of the amputated subject (Figure 6, second row: the mean accuracy improves from about 84% to 93%).

4.3. Final Remarks and Future Work

In this work, we were able to test only one amputated subject. We therefore put forward two remarks about his results, without any pretense of statistical validity. On the one hand, our subject had not been exposed to any such experiment before and had used no prosthesis whatsoever since the operation, which happened eight years before the experiment. Given the long time of non-usage of the muscles in the residual limb and the fact that he was not familiar with prosthetic devices, we find his performance surprising. On the other hand, he claimed that he could feel and move his phantom fingers, which might have helped him produce the required distinct movements. An analysis of more amputated subjects, together with a screening of their physiological characteristics, must be performed in order to get a clearer picture.
Regarding TMG in itself, a long-term evaluation is necessary to gather conclusive information about the durability of the proposed tactile sensor system and the effects of sweating on the material over longer time periods. TMG, being high-resolution force sensing, provides many signals gathered from a relatively small surface and may at first sight seem highly redundant; but one must imagine that, in the final setup, TMG would be applied to a residual limb whose muscle morphology is, in general, unknown. That is where, we believe, the high resolution of TMG could help gain better accuracy and reliability with respect to FMG. Of course, in that case, the control system must be able to work seamlessly with high-dimensional samples.
We argue that even more importantly, it is necessary to develop a novel way of assessing the feasibility of an approach, be it mono- or multi-modal, which takes into account the complexity of the situations an amputee encounters in real life. Each situation potentially induces changes in the signals recorded by the interface, which must be taken into account [46]. Therefore, a specific training-and-testing protocol must be devised, through which the effectiveness and usefulness of novel approaches can be seriously assessed. Given this requirement, our main line of future research goes in the direction of determining the optimal set of sensors through which to realize an ever-improving myocontrol.

Acknowledgments

We gratefully acknowledge Andreas Arkudas of the Plastic and Hand Surgery Department of the University Hospital of Erlangen for recruiting the amputee who took part in this study. This work was partially supported by the German Research Foundation (DFG) project TACT-Hand and by the DFG Center of Excellence EXC 277: Cognitive Interaction Technology (CITEC). Björn Eskofier gratefully acknowledges the support of the DFG within the framework of the Heisenberg professorship program (grant number ES 434/8-1).

Author Contributions

Claudio Castellini contributed to the experiment design, the data collection and writing and coordinated the work. Risto Kõiva conceived of, built and tested the setup and contributed to the writing. Cristian Pasluosta contributed to the data analysis and the writing. Carla Viegas tested the setup, designed the experiment, collected the data, performed the analysis and contributed to the writing. Björn M. Eskofier contributed to the writing and coordinated the work.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Biddiss, E.; Chau, T. Upper-limb prosthetics: Critical factors in device abandonment. Am. J. Phys. Med. Rehabil. 2007, 86, 977–987.
2. Peerdeman, B.; Boere, D.; Witteveen, H.; in ’t Veld, R.H.; Hermens, H.; Stramigioli, S.; Rietman, H.; Veltink, P.; Misra, S. Myoelectric forearm prostheses: State of the art from a user-centered perspective. J. Rehabil. Res. Dev. 2011, 48, 719–738.
3. Engdahl, S.M.; Christie, B.P.; Kelly, B.; Davis, A.; Chestek, C.A.; Gates, D.H. Surveying the interest of individuals with upper limb loss in novel prosthetic control techniques. J. NeuroEng. Rehabil. 2015, 12, 1–11.
4. Micera, S.; Carpaneto, J.; Raspopovic, S. Control of hand prostheses using peripheral information. IEEE Rev. Biomed. Eng. 2010, 3, 48–68.
5. Maat, B.; Smit, G.; Plettenburg, D.; Breedveld, P. Passive prosthetic hands and tools: A literature review. Prosthet. Orthot. Int. 2018, 42, 66–74.
6. Schweitzer, W.; Thali, M.J.; Egger, D. Case-study of a user-driven prosthetic arm design: Bionic hand versus customized body-powered technology in a highly demanding work environment. J. NeuroEng. Rehabil. 2018, 15, 1.
7. Østlie, K.; Lesjø, I.M.; Franklin, R.J.; Garfelt, B.; Skjeldal, O.H.; Magnus, P. Prosthesis rejection in acquired major upper-limb amputees: A population-based survey. Disabil. Rehabil. Assist. Technol. 2012, 7, 294–303.
8. Kyberd, P.J.; Hill, W. Survey of upper limb prosthesis users in Sweden, the United Kingdom and Canada. Prosthet. Orthot. Int. 2011, 35, 234–241.
9. i-LIMB Quantum. Available online: www.touchbionics.com/products/active-prostheses/i-limb-quantum (accessed on 22 March 2018).
10. Fougner, A.; Stavdahl, Ø.; Kyberd, P.J.; Losier, Y.G.; Parker, P.A. Control of Upper Limb Prostheses: Terminology and Proportional Myoelectric Control - A Review. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 663–677.
11. Jiang, N.; Dosen, S.; Müller, K.R.; Farina, D. Myoelectric control of artificial limbs: Is there the need for a change of focus? IEEE Signal Process. Mag. 2012, 29, 149–152.
12. Castellini, C.; Artemiadis, P.; Wininger, M.; Ajoudani, A.; Alimusaj, M.; Bicchi, A.; Caputo, B.; Craelius, W.; Dosen, S.; Englehart, K.; et al. Proceedings of the first workshop on Peripheral Machine Interfaces: Going beyond traditional surface electromyography. Front. Neurorobot. 2014, 8, 22.
13. Curcie, D.J.; Flint, J.A.; Craelius, W. Biomimetic finger control by filtering of distributed forelimb pressures. IEEE Trans. Neural Syst. Rehabil. Eng. 2001, 9, 69–75.
14. Schürmann, C.; Kõiva, R.; Haschke, R.; Ritter, H. A modular high-speed tactile sensor for human manipulation research. In Proceedings of the IEEE World Haptics Conference (WHC), Istanbul, Turkey, 21–24 June 2011; pp. 339–344.
15. Ravindra, V.; Castellini, C. A comparative analysis of three non-invasive human-machine interfaces for the disabled. Front. Neurorobot. 2014, 8, 24.
16. Guo, J.Y.; Zheng, Y.P.; Kenney, L.P.J.; Bowen, A.; Howard, D.; Canderle, J.J. A comparative evaluation of sonomyography, electromyography, force, and wrist angle in a discrete tracking task. Ultrasound Med. Biol. 2011, 37, 884–891.
17. Lucaccini, L.F.; Kaiser, P.K.; Lyman, J. The French electric hand: Some observations and conclusions. Bull. Prosthet. Res. 1966, 10, 31–51.
18. Phillips, S.L.; Craelius, W. Residual Kinetic Imaging: A Versatile Interface for Prosthetic Control. Robotica 2005, 23, 277–282.
19. Wininger, M.; Kim, N.; Craelius, W. Pressure signature of forearm as predictor of grip force. J. Rehabil. Res. Dev. 2008, 45, 883–892.
20. Yungher, D.A.; Wininger, M.T.; Barr, J.; Craelius, W.; Threlkeld, A.J. Surface muscle pressure as a measure of active and passive behavior of muscles during gait. Med. Eng. Phys. 2011, 33, 464–471.
21. Yungher, D.; Craelius, W. Improving fine motor function after brain injury using gesture recognition biofeedback. Disabil. Rehabil. Assist. Technol. 2012, 7, 464–468.
22. Lukowicz, P.; Hanser, F.; Szubski, C.; Schobersberger, W. Detecting and Interpreting Muscle Activity with Wearable Force Sensors. In Pervasive Computing; Fishkin, K.P., Schiele, B., Nixon, P., Quigley, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 3968, pp. 101–116.
23. Connan, M.; Ruiz Ramírez, E.; Vodermayer, B.; Castellini, C. Assessment of a wearable force- and electromyography device and comparison of the related signals for myocontrol. Front. Neurorobot. 2016, 10.
24. Cho, E.; Chen, R.; Merhi, L.K.; Xiao, Z.; Pousett, B.; Menon, C. Force Myography to Control Robotic Upper Extremity Prostheses: A Feasibility Study. Front. Bioeng. Biotechnol. 2016, 4, 18.
25. Kõiva, R.; Riedenklau, E.; Viegas, C.; Castellini, C. Shape conformable high spatial resolution tactile bracelet for detecting hand and wrist activity. In Proceedings of the ICORR—International Conference on Rehabilitation Robotics, Singapore, 11–14 August 2015; pp. 157–162.
26. Radmand, A.; Scheme, E.; Englehart, K. High-density force myography: A possible alternative for upper-limb prosthetic control. J. Rehabil. Res. Dev. 2016, 53, 443–456.
27. Lowe, D.G. Object Recognition from Local Scale-Invariant Features. In Proceedings of the International Conference on Computer Vision, Corfu, Greece, 20–25 September 1999.
28. Luo, S.; Mou, W.; Althoefer, K.; Liu, H. Novel Tactile-SIFT Descriptor for Object Shape Recognition. IEEE Sens. J. 2015, 15, 5001–5009.
29. Castellini, C.; Passig, G.; Zarka, E. Using ultrasound images of the forearm to predict finger positions. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 20, 788–797.
30. Sierra González, D.; Castellini, C. A realistic implementation of ultrasound imaging as a human-machine interface for upper-limb amputees. Front. Neurorobot. 2013, 7, 17.
31. Ortenzi, V.; Tarantino, S.; Castellini, C.; Cipriani, C. Ultrasound Imaging for Hand Prosthesis Control: A Comparative Study of Features and Classification Methods. In Proceedings of the ICORR—International Conference on Rehabilitation Robotics, Singapore, 11–14 August 2015; pp. 1–6.
32. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; pp. 1097–1105.
33. Jones, N. The learning machines. Nature 2014, 505, 146–148.
34. Weiss, K.; Wörn, H. The working principle of resistive tactile sensor cells. In Proceedings of the IEEE International Conference on Mechatronics and Automation (ICMA), Niagara Falls, ON, Canada, 29 July–1 August 2005; Volume 1, pp. 471–476.
35. Wolfgang Warmbier GmbH & Co. KG. Available online: www.warmbier.com (accessed on 22 March 2018).
36. Polyform Kunststofftechnik GmbH & Co. Betriebs KG. Available online: www.polyform.de (accessed on 22 March 2018).
37. Strazzulla, I.; Nowak, M.; Controzzi, M.; Cipriani, C.; Castellini, C. Online Bimanual Manipulation Using Surface Electromyography and Incremental Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, in press.
38. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the 4th Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; Volume 15, p. 50.
39. Boschmann, A.; Platzner, M. A Computer Vision-Based Approach to High Density EMG Pattern Recognition Using Structural Similarity; University of New Brunswick’s Myoelectric Controls/Powered Prosthetics Symposium (MEC): Fredericton, NB, Canada, 2014.
40. Castellini, C.; Passig, G. Ultrasound image features of the wrist are linearly related to finger positions. In Proceedings of the IROS—International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 2108–2114.
41. De Oliveira Viegas, C.L. Tactile-Based Control of a Dexterous Hand Prosthesis. Master’s Thesis, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany, 2016.
42. Powell, M.A.; Thakor, N.V. A Training Strategy for Learning Pattern Recognition Control for Myoelectric Prostheses. J. Prosthet. Orthot. 2013, 25, 30–41.
43. Jaquier, N.; Connan, M.; Castellini, C.; Calinon, S. Combining electro- and tactile myography to improve hand and wrist activity detection in prostheses. Technologies 2017, 5, 64.
44. Fang, Y.; Hettiarachchi, N.; Zhou, D.; Liu, H. Multi-Modal Sensing Techniques for Interfacing Hand Prostheses: A Review. IEEE Sens. J. 2015, 15, 6065–6076.
45. Jiang, N.; Vujaklija, I.; Rehbaum, H.; Graimann, B.; Farina, D. Is Accurate Mapping of EMG Signals on Kinematics Needed for Precise Online Myoelectric Control? IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 549–558.
46. Castellini, C.; Bongers, R.M.; Nowak, M.; van der Sluis, C.K. Upper-limb prosthetic myocontrol: Two recommendations. Front. Neurosci. 2015, 9, 496.
Figure 1. Experimental setup. The tactile bracelet was wrapped around the subject’s residual limb (around the forearm in the case of able-bodied subjects); a 3D hand model was shown to the subject on an extra monitor; data visualization, stimulus control and data recording were performed on a laptop. Image included with consent.
Figure 2. (a) The tactile bracelet prototype with 10 tactile sensor modules. Soft conductive elastomer material (foam) combined with a shape-conformable modular design makes the bracelet comfortable to wear. (b) Working principle of the tactile bracelet sensors: the resistance of a single resistive cell, measured between the two electrodes, is the sum of the foam volumetric resistance and the contact resistances between the foam and the electrodes. The resistance changes according to the load applied to the foam.
Figure 3. A single sensor module printed circuit board (PCB) with 32 tactile cells. (a) The electrode side showing the 4 × 8 arrangement of the M-shaped electrodes in a 5 mm grid. The non-conductive areas on the left and on the right of the electrode grid are reserved for attachment of the sensor elastomer with a double-sided tape. (b) The digitization circuitry is located directly on the backside to keep the analog signal path at a minimum.
Figure 4. (a) Schematic representation of the sequence of the visual stimuli and the data collection timing. (b) The sequence of stimuli. (c) Example of averaged tactile patterns per movement of one able-bodied subject. Tactile Modules 1–4 were positioned on the ventral part of the forearm, while Modules 5–10 covered the dorsal part. Higher forces applied to sensors are depicted in blue, while lower forces are shown in red (reproduced and adapted with permission from [25]).
Figure 5. Three-dimensional principal component analysis (PCA)-reduced visualization of the samples obtained from the able-bodied subjects with the highest (a,b) and lowest cluster separateness (c,d) and from the third trial of the amputee (e,f). (a,c,e) show the full set of stimulated movements, while (b,d,f) only show wrist-related movements and the resting position. (Notice that the PCA coefficients were recalculated from the left to the right column.)
Figure 6. Classification accuracies obtained by all able-bodied subjects (a,b) and the amputee (c,d) using different classifiers. NCC, Nearest-Cluster-Centroid.
