Article

Improving Motion Intention Recognition for Trans-Radial Amputees Based on sEMG and Transfer Learning

School of Information Science and Technology, Dalian Maritime University, 1 Linghai Road, Ganjingzi District, Dalian 116026, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(19), 11071; https://doi.org/10.3390/app131911071
Submission received: 4 September 2023 / Revised: 28 September 2023 / Accepted: 7 October 2023 / Published: 8 October 2023
(This article belongs to the Topic Artificial Intelligence in Healthcare - 2nd Volume)

Abstract

Hand motion intentions can be detected by analyzing the surface electromyographic (sEMG) signals obtained from the remaining forearm muscles of trans-radial amputees. This technology sheds new light on myoelectric prosthesis control; however, only limited signal data can be collected from amputees in clinical practice. The collected signals can further suffer from quality deterioration due to the muscular atrophy of amputees, which significantly decreases the accuracy of hand motion intention recognition. To overcome these problems, this work proposed a transfer learning strategy combined with a long-exposure-CNN (LECNN) model to improve the amputees' hand motion intention recognition accuracy. Transfer learning can leverage the knowledge acquired from intact-limb subjects for amputees, and LECNN can effectively capture the information in the sEMG signals. Two datasets, with 20 intact-limb and 11 amputated-limb subjects from the Ninapro database, were used to develop and evaluate the proposed method. The experimental results demonstrated that the proposed transfer learning strategy significantly improved the recognition performance (78.1% ± 19.9%, p-value < 0.005) compared with the non-transfer case (73.4% ± 20.8%). When the source and target data matched well, the after-transfer accuracy could be improved by up to 8.5%. Compared with state-of-the-art methods in two previous studies, the average accuracy was improved by 11.6% (from 67.5% to 78.1%, p-value < 0.005) and 12.1% (from 67.0% to 78.1%, p-value < 0.005). This result is also among the best of the compared methods.

1. Introduction

Electromyography (EMG) measures the bioelectrical currents generated by muscle contraction in the human body. The nervous system controls muscle activity (contraction or relaxation), and different muscle fiber motor units produce different signals at the same time, which can be measured on the skin surface. The electromyographic signals collected on the skin surface are called surface electromyographic (sEMG) signals [1]. Measuring the surface electromyographic signal has the advantage of being non-invasive [2], and it has been a popular method for testing in medicine [3,4,5]. Furthermore, the sEMG signal is generally generated 30–150 ms ahead of the corresponding limb movement, so the movement can be judged in advance; therefore, the next action can be predicted from sEMG, which has given rise to sEMG-based prediction models. The surface electromyographic signal is one of the most widely used biological signals for motion intention prediction [6].
Hand gesture recognition based on sEMG signals is one of the main paradigms for prosthetic and rehabilitation device control [7]. Upper-limb amputation brings great inconvenience to trans-radial amputees' lives, and myoelectric prostheses can help improve their quality of life. Several electrodes are used to record the myoelectric signals, and the motion intentions can then be recognized by pattern recognition algorithms [8]. In the past decades, sEMG-based hand motion intention recognition has primarily been researched as a classification task [9,10,11]. Many limitations exist in real circumstances that hinder research on classification algorithms for myoelectric prostheses; for example, few amputee volunteers can be recruited for experiments, resulting in limited data. In addition, the collected signals can further suffer from quality deterioration due to muscular atrophy, noise pollution, etc. These problems can lead to inaccurate recognition of motion intentions and, thus, result in the limited capabilities and adoption of myoelectric prostheses [12].
In recent years, transfer learning has evolved as a novel learning framework for transferring information from one domain of interest to another [13], and there have been several applications of transfer learning in myoelectric control. Most of these applications focus on solving cross-subject problems, and numerous researchers have been dedicated to constructing more robust recognition algorithms by utilizing historical data from source subjects. Castellini, Fiorilla et al. [14] observed that sEMG signals vary greatly between individuals and that models trained on different people are largely subject-specific; however, they demonstrated that a pre-trained model was effective in reducing the amount of time needed for an individual to become proficient with a prosthesis. Matsubara, Hyon et al. [15] split myoelectric signals into two independent parts, a motion-dependent part and a user-dependent part, such that models could be reused by rapidly learning only the user-dependent part for a new user. Sensinger, Lock et al. [16] proposed concatenating the source and target data and evaluated various classification algorithms on the combined set. Park, Lee et al. [17] proposed a user-adaptive model based on a CNN, using deep learning to resolve an sEMG-based gesture recognition task for the first time; this model outperformed the support vector machine (SVM) in terms of classification accuracy and robustness. Cote-Allard, Fall et al. [18] presented an inter-participant transfer learning framework based on a progressive neural network (PNN) to improve cross-subject recognition accuracy, in which each subject was considered as a source task. Fan, Jiang et al. [19] proposed a modified LeNet combined with a pretraining mechanism to improve the recognition performance on amputees. These studies proved the feasibility of using transfer learning in the field of myoelectric control, especially for the problem of low recognition accuracy when a model is applied across subjects.
However, the earlier studies were limited in that they only considered intact-limb subjects in their experiments [20,21,22,23], i.e., the data of both the source and target domains were from intact subjects. Signals from intact people are easier to acquire and of higher quality, which may lead to an overoptimistic recognition performance. Inspired by previous studies on cross-subject problems, namely, that there should be some shared motion-related information between different subjects, the basic hypothesis of this work is that the performance of amputees' hand motion intention recognition can be improved with the help of information from intact subjects. In this study, a CNN-based transfer learning strategy was proposed to improve the recognition accuracy for amputees, whose signals are weaker, noisier, and less available in databases. Additionally, to better exploit the temporal and spatial information of the sEMG signal, a long-exposure segmentation was proposed for data augmentation. The proposed method was developed and evaluated on the Ninapro (Non-Invasive Adaptive Prosthetics) DB2 and DB3 datasets.

2. Materials and Methods

The general steps of a conventional sEMG-based motion intention recognition task are preprocessing, windowing, feature extraction, and classification. Figure 1 depicts the whole workflow of the proposed motion intention recognition system. Compared with the general steps, there is an additional step named long-exposure segmentation following the feature extraction, which is explained in Section 2.3.1.

2.1. Dataset

The Ninapro (Non-Invasive Adaptive Prosthetics) database is one of the most extensive public databases for prosthesis movement categorization. In this study, DB2 and DB3 were used to develop and evaluate the proposed method. Atzori, Gijsberts et al. [24] describe these two datasets in detail; a brief introduction is provided here for clarity. There are 40 intact subjects in DB2 and 11 trans-radial amputees (11 males; age 42.36 ± 11.96 years) in DB3. Table 1 shows the specifics of the amputation information. Each participant was instructed to perform 49 motions (divided into 3 exercises named Exercise A, B, and C) six times following the computer screen during the acquisition process. Each repetition lasted five seconds and was followed by a three-second rest gesture. In this work, 17 movements in Exercise B (Figure 2) were classified. These movements included 8 isometric and isotonic hand configurations and 9 basic movements of the wrist, which are common in daily life and are frequently investigated in similar studies. The sEMG signals were recorded using 12 Delsys double-differential sEMG electrodes fixed on the subjects' forearms with a sampling rate of 2 kHz. Eight electrodes were equally spaced around the forearm at the height of the radio-humeral joint; two electrodes were placed on the main activity spots of the flexor digitorum superficialis and of the extensor digitorum superficialis; two electrodes were also placed on the main activity spots of the biceps brachii and of the triceps brachii. Before the datasets were made public, several signal processing steps were applied. First, the 50 Hz power-line interference (and its harmonics) was removed from the sEMG using a Hampel filter (a filter for removing outliers). Then, the signal data streams were synchronized with high-resolution timestamps and re-labelled to eliminate the mismatch between the performed movements and the stimuli from the computer screen.
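The sketch below illustrates how such an acquisition file could be loaded and restricted to the active (non-rest) movement samples. It is not the authors' preprocessing code: the file path is illustrative, and it assumes the standard Ninapro field names 'emg', 'restimulus', and 'rerepetition'.

```python
import numpy as np
from scipy.io import loadmat

def load_ninapro_exercise(mat_path):
    """Load one Ninapro acquisition file (the path is illustrative).

    Assumes the standard Ninapro field names: 'emg' (raw sEMG),
    'restimulus' (relabelled movement id, 0 = rest), and
    'rerepetition' (repetition index 1..6, 0 = rest).
    """
    data = loadmat(mat_path)
    emg = data['emg'].astype(np.float32)      # (T, n_channels) sEMG at 2 kHz (12 channels for most subjects)
    labels = data['restimulus'].ravel()       # movement id per raw sample
    reps = data['rerepetition'].ravel()       # repetition id per raw sample
    return emg, labels, reps

# keep only the active (non-rest) samples of one Exercise B acquisition file
emg, labels, reps = load_ninapro_exercise('path/to/subject_exerciseB.mat')
active = labels > 0
emg, labels, reps = emg[active], labels[active], reps[active]
```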

2.2. Feature Extraction

The raw sEMG signals were first transformed into low-dimensional feature vectors with increased information density, i.e., one feature point was calculated from a sliding window of 100 ms (with 200 sampling points), so the information of the whole window was compressed into one feature point. Generally, features extracted from the time-frequency domain (TFD features) contain more information from both the temporal and frequency domains [25]. In this study, the marginal discrete wavelet transform (mDWT) was used to extract the TFD features, as it performed better in a prior study [26]. The mDWT features were calculated as in Equation (1), where $S$ represents the deepest decomposition level ($S = \log_2 N$, with $N$ the wavelet order) and was set to 3 in this study, and $d_x(s,u)$ denotes the wavelet coefficient of the signal $x(t)$ at level $s$ and translation $u$. In this work, a Daubechies 7 wavelet (N = 7) served as the mother wavelet. The sliding window was 200 ms in length with a 10 ms increment. A comparison of the raw signals and the corresponding mDWT features is presented in Figure 3.
$$m_x(s) = \sum_{u=0}^{N/2^{S}-1} \left| d_x(s,u) \right|, \qquad s = 1, \ldots, S \tag{1}$$
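As a concrete illustration, the following sketch computes per-channel mDWT features with the PyWavelets library. It assumes the feature at each decomposition level is the sum of absolute wavelet coefficients as in Equation (1) and that the approximation band is kept alongside the three detail bands, giving the S + 1 = 4 values per channel used in Section 2.3.1; boundary handling and normalization are not specified in the paper and are assumptions here.

```python
import numpy as np
import pywt

def mdwt_features(window, wavelet='db7', level=3):
    """Marginal DWT features for one analysis window.

    window: (n_samples, n_channels) raw sEMG, e.g. 200 x 12 for a 100 ms
    window at 2 kHz. Returns a (n_channels * (level + 1),) vector: the sum
    of absolute wavelet coefficients in each band (approximation plus
    'level' detail bands), per channel.
    """
    feats = []
    for ch in range(window.shape[1]):
        coeffs = pywt.wavedec(window[:, ch], wavelet, level=level)
        feats.extend(np.sum(np.abs(c)) for c in coeffs)
    return np.asarray(feats, dtype=np.float32)   # 12 channels * 4 bands = 48 values
```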

2.3. Gesture Recognition Using LECNN

Convolutional neural networks (CNNs) perform excellently in classification tasks across a wide range of disciplines because of their unparalleled ability to exploit spatial features. CNNs have also proved successful in sEMG pattern recognition, with better classification accuracies than several traditional machine learning methods, namely, LDA, SVM, KNN, MLP, and RF [27]. Specifically, Geng et al. [28] employed a CNN for hand motion recognition on three public databases, and the CNN achieved the highest classification accuracies on all the datasets compared with traditional methods. Du et al. [29] employed a similar approach with domain adaptation to achieve better performance in inter-session and inter-subject scenarios. Their experimental results showed that the CNN outperformed traditional methods on a Ninapro dataset with 12 hand gestures; therefore, we proposed a CNN-based model in this study.

2.3.1. Long-Exposure Segmentation

Instead of the traditional segmentation of sEMG signals, which treats an entire analyzing window as a single sample, a long-exposure strategy akin to [30] was used for sample segmentation. To be more precise, each long-exposure sEMG sample was created from the raw sEMG and contained 200 subframes of mDWT features, where each subframe was calculated from a 100 ms raw sEMG segment advanced with a 0.5 ms increment. As Figure 4 shows, the traditional way extracts mDWT features from a whole analyzing window; the proposed way instead splits one analyzing window into several minor processing windows (each containing 100 ms of raw sEMG, advanced with a 0.5 ms increment) from which the subframes of mDWT features are calculated. The final input was built from 200 subframes, giving a sample size of 200 × 48 (i.e., number of sEMG channels × (S + 1) = 12 × 4 = 48, where S is the deepest decomposition level mentioned in Section 2.2). The long-exposure structure therefore includes more sEMG information in the time and space dimensions than traditional segmentation, as sketched below.
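A minimal sketch of this segmentation is given below; it reuses the hypothetical mdwt_features helper from Section 2.2 and assumes a 10 ms stride between consecutive analyzing windows (20 subframes), following Figure 1.

```python
import numpy as np

def long_exposure_samples(emg, n_subframes=200, minor_win=200, step=1, sample_step=20):
    """Build long-exposure inputs from continuous sEMG of shape (T, 12).

    Each subframe is an mDWT feature vector from a 100 ms minor window
    (minor_win = 200 raw samples at 2 kHz) advanced by 0.5 ms (step = 1 raw
    sample); one input stacks n_subframes consecutive subframes into a
    200 x 48 matrix. sample_step = 20 subframes corresponds to a 10 ms
    stride between analyzing windows. mdwt_features is the sketch from
    Section 2.2.
    """
    subframes = np.stack([
        mdwt_features(emg[i:i + minor_win])
        for i in range(0, emg.shape[0] - minor_win + 1, step)
    ])                                               # (n_total_subframes, 48)
    samples = [
        subframes[j:j + n_subframes]
        for j in range(0, subframes.shape[0] - n_subframes + 1, sample_step)
    ]
    return np.stack(samples)                         # (n_samples, 200, 48)
```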

2.3.2. Long-Exposure Convolutional Neural Network (LECNN)

The same CNN architecture, used for both the source and target models, is presented in Figure 5. Three convolutional blocks after the input layer served as a feature extractor. Block1 and Block3 each had a convolutional layer and a batch normalization (BN) layer to mitigate vanishing gradients. Block2 had an additional pooling layer to aggregate the features and reduce the computational cost. The convolutional layers in the three blocks had 32 filters of size 1 × 3 × 3, 64 filters of size 32 × 3 × 3, and 32 filters of size 64 × 1 × 1, respectively. After each BN layer, a ReLU layer was adopted as the activation function. After Block3, a fully connected (FC) layer with 17 units was employed as the classifier. Finally, a softmax layer was applied after the FC layer to calculate the probability of each gesture class for a given sample.
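A PyTorch sketch of this architecture is shown below. Only the filter counts, kernel sizes, BN/ReLU placement, the single 2 × 2 pooling layer, and the 17-unit classifier come from the description above; padding, strides, and the flattening dimension are assumptions made so that the sketch runs on the 200 × 48 inputs.

```python
import torch
import torch.nn as nn

class LECNN(nn.Module):
    """Sketch of the LECNN: three convolutional blocks, then a 17-way classifier.

    Input: (batch, 1, 200, 48) long-exposure mDWT samples.
    """
    def __init__(self, n_classes=17):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.ReLU())
        self.block2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2))                          # 200 x 48 -> 100 x 24
        self.block3 = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=1),
            nn.BatchNorm2d(32), nn.ReLU())
        self.fc = nn.Linear(32 * 100 * 24, n_classes)  # flatten size assumes the padding above

    def forward(self, x):
        x = self.block3(self.block2(self.block1(x)))
        return self.fc(torch.flatten(x, 1))           # logits; softmax is applied at inference
                                                       # (or implicitly by nn.CrossEntropyLoss)
```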

2.4. Transfer Learning

Deep transfer learning investigates how to transfer information from one deep neural network to another. Network-based deep transfer learning is one of the most prevalent forms; it reuses all or part of a network trained on the source domain in the target domain. It has been demonstrated that the shallow layers of a deep neural network typically capture more general features [31]; hence, it is reasonable to transfer them to a target model.
In our study, the proposed model was first pretrained on the source subjects, under the assumption that general knowledge of gesture recognition can be learned from a large amount of data. The source model was then partially transferred to the target model. Specifically, the parameters of the convolutional blocks were transferred from the source model to the target model, while the parameters of the FC layer were initialized randomly. The parameters of Block1 were then frozen during the subsequent training to extract general features, while the parameters of Block2 and Block3 were finetuned to eliminate the difference between the two domains. The whole transfer learning framework is depicted in Figure 6.
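In code, this transfer step can be sketched as follows; it is a minimal illustration built on the LECNN sketch above, and the optimizer choice and learning rate are placeholders not specified by the paper.

```python
import copy
import torch

def build_target_model(source_model, lr=1e-3):
    """Copy the pretrained convolutional blocks, re-initialize the FC layer,
    freeze Block1, and finetune Block2, Block3, and the FC layer."""
    target = copy.deepcopy(source_model)            # conv blocks inherit source weights
    target.fc.reset_parameters()                    # classifier re-initialized randomly
    for p in target.block1.parameters():            # Block1 frozen: general features
        p.requires_grad = False
    # Note: calling target.block1.eval() during training would also freeze
    # the BN running statistics of the frozen block.
    trainable = [p for p in target.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(trainable, lr=lr)  # optimizer/lr are assumptions
    return target, optimizer
```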

2.5. Experiments and Data Analysis

In this work, sEMG signals with a high SNR recorded from 20 intact-limb subjects made up the source domain, since the performance of the source model may influence the transfer learning. A target model was established for each amputated subject independently to explore the efficacy of transfer learning. All the sEMG signals were segmented into long-exposure samples as described in Section 2.3.1. Samples from repetitions 2 and 5 were used for testing, while repetitions 1, 3, 4, and 6 were used for training.
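The repetition-wise split can be expressed as a small helper (a sketch only; the array names follow the illustrative loader in Section 2.1).

```python
import numpy as np

def split_by_repetition(X, y, reps, test_reps=(2, 5)):
    """Split samples by repetition id: repetitions 2 and 5 form the test set,
    repetitions 1, 3, 4, and 6 the training set. X, y, and reps are aligned
    along axis 0 (samples, labels, repetition ids)."""
    test_mask = np.isin(reps, test_reps)
    return (X[~test_mask], y[~test_mask]), (X[test_mask], y[test_mask])
```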
The source LECNN model was first trained on the 20 intact subjects to learn general knowledge of gesture recognition. Then, eleven models (one per amputee) were trained using the target training data (i.e., repetitions 1, 3, 4, and 6 of each amputee) without transfer learning; these served as the reference models. Finally, eleven models using the proposed transfer learning strategy were trained on the same target data to improve the recognition performance. Consequently, 23 models were established in this work: (i) a source model, (ii) eleven subject-specific models without transfer learning (named LECNN-Onlys), and (iii) eleven target CNN models combined with transfer learning (named LECNN-TLs).
The model performance was evaluated based on the classification accuracy and the weighted F1-score (Equation (2)). A one-tailed Wilcoxon signed-rank test was used to compare the recognition performance of the LECNN-Onlys and LECNN-TLs. Considering that a higher average accuracy after transfer could result from the strategy working well on some gestures but not others, the accuracy of each gesture was also analyzed. Additionally, since clinical amputation parameters could significantly impact the recognition performance, the correlations of the classification accuracy with two clinical parameters (i.e., the remaining limb percentage and the years since amputation) were investigated using a Pearson's correlation analysis:
$$\text{F1-score} = \frac{1}{\sum_{l \in L} |\hat{y}_l|} \sum_{l \in L} |\hat{y}_l| \, F_1(y_l, \hat{y}_l) \tag{2}$$
$F_1(y_l, \hat{y}_l)$ is defined as in Equation (3):
$$F_1(y_l, \hat{y}_l) = \frac{2\, P(y_l, \hat{y}_l) \times R(y_l, \hat{y}_l)}{P(y_l, \hat{y}_l) + R(y_l, \hat{y}_l)} \tag{3}$$
$P(y_l, \hat{y}_l)$ and $R(y_l, \hat{y}_l)$ are the precision and recall defined in Equations (4) and (5):
$$P(y_l, \hat{y}_l) = \frac{|y_l \cap \hat{y}_l|}{|y_l|} \tag{4}$$
$$R(y_l, \hat{y}_l) = \frac{|y_l \cap \hat{y}_l|}{|\hat{y}_l|} \tag{5}$$
where $L$ is the set of labels, $y_l$ is the set of samples with true label $l$, and $\hat{y}_l$ is the set of samples predicted as label $l$.
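For reference, these metrics and statistical tests are readily available in scikit-learn and SciPy. The sketch below is an illustration only (variable names are hypothetical), assuming that the weighted F1-score of Equation (2) corresponds to sklearn's average='weighted' option.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score
from scipy.stats import wilcoxon, pearsonr

def evaluate(y_true, y_pred):
    """Classification accuracy and weighted F1-score (cf. Equations (2)-(5))."""
    return accuracy_score(y_true, y_pred), f1_score(y_true, y_pred, average='weighted')

def compare_and_correlate(acc_tl, acc_only, clinical_param):
    """Paired one-tailed Wilcoxon signed-rank test (LECNN-TL > LECNN-Only) and
    Pearson correlation between per-subject accuracy and a clinical parameter
    (e.g. remaining forearm percentage); inputs are per-amputee arrays."""
    _, p_wilcoxon = wilcoxon(acc_tl, acc_only, alternative='greater')
    r, p_pearson = pearsonr(acc_tl, clinical_param)
    return p_wilcoxon, r, p_pearson
```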

3. Results

3.1. Classification Accuracy on Intact-Limb Subjects

As described above, the source model was trained using signals from 20 intact-limb subjects. We obtained a 91.6% ± 3.4% classification accuracy and a 93.4% ± 3.2% weighted F1-score. The classification accuracy was improved by 6.1% (91.6% vs. 85.5%) compared to a similar experiment [19] with the same features, gestures, and subjects.

3.2. Classification Accuracy on Amputated-Limb Subjects

The average classification accuracy and weighted F1-score of the LECNN-TLs were significantly higher than those of the LECNN-Onlys (78.1% ± 5.6% and 79.6% ± 7.2% vs. 73.5% ± 5.2% and 76.2% ± 6.7%, respectively; p-value = 0.0005 < 0.05, Cohen's d = 0.24 > 0.2). Figure 7 compares the classification accuracy and weighted F1-score of each LECNN-Only and LECNN-TL model pair. As shown in Figure 7, the classification accuracies after transfer were all higher than their non-transfer counterparts, and the improvements on s3, s7, and s9 exceeded 5%.

3.3. Classification Accuracies on Different Gestures

Figure 7 presents accuracies averaged over all gestures for each subject; however, an improvement in the averaged classification accuracy does not necessarily imply an improvement on every gesture. To investigate the effectiveness of transfer learning for each gesture, the average classification accuracy of each gesture across the 11 amputees was calculated. As Figure 8 shows, the blue line (LECNN-TLs) showed higher accuracies than the orange line (LECNN-Onlys) across all 17 gestures. In other words, no negative transfer (i.e., a decline in model performance after applying transfer learning) occurred during the transfer learning process, which further supports the effectiveness of the proposed transfer learning strategy.
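The per-gesture accuracy used for this analysis is simply the per-class recall of the confusion matrix; a short sketch is given below (it assumes the 17 gestures are labelled 1-17, which is an assumption about the label encoding).

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def per_gesture_accuracy(y_true, y_pred, n_classes=17):
    """Per-gesture accuracy (recall of each class), as used for the
    Figure 8-style comparison; assumes gesture labels 1..n_classes."""
    cm = confusion_matrix(y_true, y_pred, labels=np.arange(1, n_classes + 1))
    return cm.diagonal() / cm.sum(axis=1)
```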

4. Discussion

4.1. Classification Accuracy Comparison with Other Typical Methods

The average classification accuracy of the amputees after transfer learning reached 78.1% on the 17-class motion intention recognition task. Compared with the two similar studies in Table 2, both of which also transferred knowledge from intact-limb to amputated-limb subjects, the proposed method achieved a better recognition performance.
Table 3 compares the recognition performance of our method with state-of-the-art methods assessed on amputees (DB3). Atzori, Cognolato et al. [32] classified 50 movements on 11 amputees to establish a baseline. In their study, the CNN's average classification accuracy was under 40%, and the SVM classifier had the best accuracy at 42.7%. Arunraj, Srinivasan et al. [26] employed different features (e.g., WL, MAV, auto-regressive coefficients (ARC), and logarithmic variance (LV)) in an RFM (random Fourier mapping) classifier and obtained an average accuracy of 53.3% for 50 movements on 11 amputees. These two studies both classified 50 movements, which is more than twice the number of classes considered in our study (i.e., 17 movements); thus, the accuracy gap between theirs and ours may not be as large as it appears, and we list them only to present a baseline. Cene and Balbinot [33] enhanced the categorization accuracy further: using the extreme learning machine (ELM) technique, their study achieved a mean accuracy of 67.0% for 17 movements. Inspired by [32], Fan, Jiang et al. [19] utilized a deep learning model similar to LeNet and demonstrated a higher average accuracy of 67.5%. In our experiments, the results showed a superior performance with an average accuracy of 78.1%. Note that Cene and Balbinot [34], Fan, Jiang et al. [19], Tosin, Cene et al. [35], and Zhai, Jelfs et al. [36] removed Subject 7 from their work, as his forearm was entirely lost, as shown in Table 1; this severe condition can result in an exceptionally low rate of correct recognition. Consequently, a new set of models was trained excluding Subject 7, as the above references did, for comparison. The proposed method then attained an average accuracy of 83.5%, which surpasses the state-of-the-art methods.
Table 2. Recognition performance compared with the other two transfer strategies.

| | Gregori, Gijsberts et al. [37] | Fan, Jiang et al. [19] | Proposed |
| --- | --- | --- | --- |
| Motions | 17 | 17 | 17 |
| Int./Amp. | 20/9 | 20/11 | 20/11 |
| Features | Avg. MAV/VAR/WL | mDWT | mDWT |
| Classifier | SVM | CNN | LECNN |
| Non-transfer | 52.1% | 62.0% | 73.4% |
| Transfer | 51.9% | 67.5% | 78.1% |
| Improvement | −0.2% | 5.5% | 4.7% |
Table 3. Comparison of classification performance with state-of-the-art methods on amputees.

| Study | Gestures | Amputees | Features | Model | Accuracy |
| --- | --- | --- | --- | --- | --- |
| Atzori, Cognolato et al. [32] | 50 | 11 | RMS | SVM | 42.7% |
| Arunraj, Srinivasan et al. [26] | 50 | 11 | LV/ARC/WL/MAV | RFM | 53.3% |
| Cene and Balbinot [33] | 17 | 10 | RMS | CNN | 56.9% |
| Cene and Balbinot [34] | 17 | 11 | Avg. RMS/MAV/SD | ELM | 67.0% |
| Tosin, Cene et al. [35] | 17 | 10 | Feature Selection | SVM-REF | 74.8% |
| Zhai, Jelfs et al. [36] | 10 | 10 | Spectrogram | CNN | 73.3% |
| Fan, Jiang et al. [19] | 17 | 11 | mDWT | CNN | 67.5% |
| Fan, Jiang et al. [19] | 17 | 10 | mDWT | CNN | 82.3% |
| Proposed | 17 | 11 | mDWT | LECNN | 78.1% |
| Proposed | 17 | 10 | mDWT | LECNN | 83.5% |

4.2. Correlation Analysis of Recognition Performance and Amputation Factors

Many previous studies have shown that sEMG signals vary between different subjects, and this variation is even more pronounced across amputees. Consequently, motivated by the distribution of the classification accuracies shown in Figure 7, we explored the correlations between classification accuracy and two amputation factors (i.e., the remaining forearm percentage (RFP) and years since amputation (YSA)) using a Pearson correlation analysis. As shown in Figure 9, there was a positive linear correlation between the classification accuracy and the RFP, with a Pearson correlation coefficient of r = 0.63. Atzori, Gijsberts et al. [8] reported a similar result in their study. This result reveals the challenge of motion intention recognition for subjects with a high amputation level. The classification accuracy (only 17.9%) was particularly poor when the entire forearm was lost; although it was improved to 24.9% after transfer learning, it was still too low for accurate recognition. Due to nerve injury, the residual muscles of amputees may generate weaker sEMG signals than those of intact people [38]; therefore, when the entire forearm was absent, the information from the subjects with intact forearms may have been of little use. In contrast, no significant linear correlation was found between the classification accuracy and the YSA. This result is inconsistent with the results from [8]. One probable explanation is that they classified a larger number of motions (i.e., 50 classes), which reduced the classification accuracy of the recently amputated subjects; in our study, the proposed method showed a more stable recognition performance. Furthermore, since s7 is a strong outlier that could affect the analysis, we repeated the correlation analysis excluding s7. The results again showed no significant linear correlation between the classification accuracy and the YSA, consistent with the original result. However, the linear correlation between the classification accuracy and the RFP no longer held after excluding s7; this was expected, as it indicates a more stable recognition performance of our method across amputees with different RFPs.

4.3. Computational Cost

Although online performance is ultimately more important, many studies begin with offline experiments to show the effectiveness of their methods, and we likewise performed offline experiments first in this work. Considering the importance of online performance, however, we further evaluated the computational cost of our method as an extension of this study. The computational cost was evaluated on a server with a 40-thread Intel(R) Xeon(R) Silver 4316 CPU and an NVIDIA GeForce RTX 3090 GPU, and on a Raspberry Pi 4B with 4 GB of memory. The inference time for one sample was 570 μs on the server and 49 ms on the Raspberry Pi, both well below 200 ms (an acceptable time delay in real use). This indicates that the proposed method has the potential to be used in real-world applications.
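Latency of this kind can be measured with a simple timing loop such as the sketch below; it is illustrative only, not the authors' benchmarking code, and the warm-up and run counts are arbitrary.

```python
import time
import torch

def mean_inference_ms(model, sample_shape=(1, 1, 200, 48), n_runs=100, device='cpu'):
    """Rough per-sample inference latency in milliseconds (a sketch;
    shapes and run counts are illustrative)."""
    model = model.to(device).eval()
    x = torch.randn(*sample_shape, device=device)
    with torch.no_grad():
        for _ in range(10):                 # warm-up runs
            model(x)
        if device != 'cpu':
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        if device != 'cpu':
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return elapsed / n_runs * 1000.0
```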

5. Conclusions

In this study, a deep transfer learning strategy based on an LECNN model was proposed to improve the recognition accuracy of hand motion intention for trans-radial amputees. The performance of the target models was improved after applying the proposed transfer learning strategy. This result demonstrates an encouraging way to enhance the motion intention recognition of amputees by utilizing information from intact people. Additionally, by creating models specifically for each amputee and examining the relationship between two amputation factors and the classification accuracy, we confirmed that a low RFP poses a challenge to the motion intention recognition task for amputees. This motivates the development of more robust algorithms for recognizing motion intention in amputees with a high amputation level.
In this study, only the same hand gestures were considered for both the intact and amputated subjects, so the performance of our system cannot be guaranteed when a new gesture appears; therefore, future work could focus on improving the robustness of the system to new gestures. This could be accomplished by examining which non-stationary characteristics of the sEMG signal change between existing and new gestures in order to develop a highly robust system.

Author Contributions

Methodology, C.L.; software, X.N. and J.Z.; validation, X.N. and J.Z.; writing—original draft preparation, X.N.; writing—review and editing, X.F.; project administration, X.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Leading talent project of Dalian Maritime University (00253020).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are unavailable due to privacy restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Mills, K.R. The Basics of Electromyography. J. Neurol. Neurosurg. Psychiatry 2005, 76 (Suppl. S2), 32–35.
2. Yang, G.; Li, N. Design of the Human Surface Electromyogra Signal Acquisition System and Signal Analysis. In Proceedings of the Seventh Asia International Symposium on Mechatronics; Duan, B., Umeda, K., Hwang, W., Eds.; Springer: Singapore, 2020; pp. 915–926.
3. Liu, C.; Li, J.; Zhang, S.; Yang, H.; Guo, K. Study on Flexible sEMG Acquisition System and Its Application in Muscle Strength Evaluation and Hand Rehabilitation. Micromachines 2022, 13, 2047.
4. Sung, J.H.; Baek, S.-H.; Park, J.-W.; Rho, J.H.; Kim, B.-J. Surface Electromyography-Driven Parameters for Representing Muscle Mass and Strength. Sensors 2023, 23, 5490.
5. Ginszt, M.; Zieliński, G. Novel Functional Indices of Masticatory Muscle Activity. J. Clin. Med. 2021, 10, 1440.
6. Li, W.; Shi, P.; Yu, H. Gesture Recognition Using Surface Electromyography and Deep Learning for Prostheses Hand: State-of-the-Art, Challenges, and Future. Front. Neurosci. 2021, 15, 621885.
7. Copaci, D.; Arias, J.; Gómez-Tomé, M.; Moreno, L.; Blanco, D. SEMG-Based Gesture Classifier for a Rehabilitation Glove. Front. Neurorobot. 2022, 16, 750482.
8. Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.-G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Muller, H. Effect of clinical parameters on the control of myoelectric robotic prosthetic hands. J. Rehabil. Res. Dev. 2016, 53, 345–358.
9. Schultz, A.E.; Kuiken, T.A. Neural Interfaces for Control of Upper Limb Prostheses: The State of the Art and Future Possibilities. Phys. Med. Rehabil. 2011, 3, 55–67.
10. Oskoei, M.A.; Hu, H. Myoelectric Control Systems—A Survey. Biomed. Signal Process. Control. 2007, 2, 275–294.
11. Li, X.; Chen, S.; Zhang, H.; Samuel, O.W.; Wang, H.; Fang, P.; Zhang, X.; Li, G. Towards Reducing the Impacts of Unwanted Movements on Identification of Motion Intentions. J. Electromyogr. Kinesiol. 2016, 28, 90–98.
12. Al-Timemy, A.H.; Khushaba, R.N.; Bugmann, G.; Escudero, J. Improving the Performance Against Force Variation of EMG Controlled Multifunctional Upper-Limb Prostheses for Transradial Amputees. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 650–661.
13. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
14. Castellini, C.; Fiorilla, A.E.; Sandini, G. Multi-Subject/Daily-Life Activity EMG-Based Control of Mechanical Hands. J. Neuroeng. Rehabil. 2009, 6, 41.
15. Matsubara, T.; Hyon, S.-H.; Morimoto, J. Learning and Adaptation of a Stylistic Myoelectric Interface: EMG-Based Robotic Control with Individual User Differences. In Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, Karon Beach, Thailand, 7–11 December 2011; pp. 390–395.
16. Sensinger, J.W.; Lock, B.A.; Kuiken, T.A. Adaptive Pattern Recognition of Myoelectric Signals: Exploration of Conceptual Framework and Practical Algorithms. IEEE Trans. Neural Syst. Rehabil. Eng. 2009, 17, 270–278.
17. Park, K.-H.; Lee, S.-W. Movement Intention Decoding Based on Deep Learning for Multiuser Myoelectric Interfaces. In Proceedings of the 2016 4th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Republic of Korea, 22–24 February 2016; pp. 1–2.
18. Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 760–771.
19. Fan, J.; Jiang, M.; Lin, C.; Li, G.; Fiaidhi, J.; Ma, C.; Wu, W. Improving SEMG-Based Motion Intention Recognition for Upper-Limb Amputees Using Transfer Learning. Neural Comput. Appl. 2023, 35, 16101–16111.
20. Chen, X.; Li, Y.; Hu, R.; Zhang, X.; Chen, X. Hand Gesture Recognition Based on Surface Electromyography Using Convolutional Neural Network with Transfer Learning Method. IEEE J. Biomed. Health Inform. 2021, 25, 1292–1304.
21. Yu, Z.; Zhao, J.; Wang, Y.; He, L.; Wang, S. Surface EMG-Based Instantaneous Hand Gesture Recognition Using Convolutional Neural Network with the Transfer Learning Method. Sensors 2021, 21, 2540.
22. Soroushmojdehi, R.; Javadzadeh, S.; Pedrocchi, A.; Gandolla, M. Transfer Learning in Hand Movement Intention Detection Based on Surface Electromyography Signals. Front. Neurosci. 2022, 16, 977328.
23. Ozdemir, M.; Kisa, D.; Guren, O.; Akan, A. Hand Gesture Classification Using Time-Frequency Images and Transfer Learning Based on CNN. Biomed. Signal Process. Control. 2022, 77, 103787.
24. Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.-G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Müller, H. Electromyography Data for Non-Invasive Naturally-Controlled Robotic Hand Prostheses. Sci. Data 2014, 1, 140053.
25. Burhan, N.; Kasno, M.; Ghazali, R. Feature Extraction of Surface Electromyography (SEMG) and Signal Processing Technique in Wavelet Transform: A Review. In Proceedings of the 2016 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS), Selangor, Malaysia, 22–22 October 2016; pp. 141–146.
26. Arunraj, M.; Srinivasan, A.; Arjunan, S. A Real-Time Capable Linear Time Classifier Scheme for Anticipated Hand Movements Recognition from Amputee Subjects Using Surface EMG Signals. IRBM 2021, 42, 277–293.
27. Phinyomark, A.; Scheme, E. EMG Pattern Recognition in the Era of Big Data and Deep Learning. Big Data Cogn. Comput. 2018, 2, 21.
28. Geng, W.; Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Li, J. Gesture Recognition by Instantaneous Surface EMG Images. Sci. Rep. 2016, 6, 36571.
29. Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Geng, W. Surface EMG-Based Inter-Session Gesture Recognition Enhanced by Deep Domain Adaptation. Sensors 2017, 17, 458.
30. Guo, W.; Ma, C.; Wang, Z.; Zhang, H.; Farina, D.; Jiang, N.; Lin, C. Long Exposure Convolutional Memory Network for Accurate Estimation of Finger Kinematics from Surface Electromyographic Signals. J. Neural Eng. 2021, 18, 026027.
31. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How Transferable Are Features in Deep Neural Networks? In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2014; p. 27.
32. Atzori, M.; Cognolato, M.; Müller, H. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands. Front. Neurorobot. 2016, 10, 9.
33. Cene, V.H.; Balbinot, A. Resilient EMG Classification to Enable Reliable Upper-Limb Movement Intent Detection. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 2507–2514.
34. Cene, V.H.; Balbinot, A. Enhancing the Classification of Hand Movements through SEMG Signal and Non-Iterative Methods. Health Technol. 2019, 9, 561–577.
35. Tosin, M.C.; Cene, V.H.; Balbinot, A. Statistical Feature and Channel Selection for Upper Limb Classification Using SEMG Signal Processing. Res. Biomed. Eng. 2020, 36, 411–427.
36. Zhai, X.; Jelfs, B.; Chan, R.H.M.; Tin, C. Self-Recalibrating Surface EMG Pattern Recognition for Neuroprosthesis Control Based on Convolutional Neural Network. Front. Neurosci. 2017, 11, 379.
37. Gregori, V.; Gijsberts, A.; Caputo, B. Adaptive Learning to Speed-up Control of Prosthetic Hands: A Few Things Everybody Should Know. In Proceedings of the 2017 International Conference on Rehabilitation Robotics (ICORR), London, UK, 17–20 July 2017; pp. 1130–1135.
38. Wang, H.; Fang, P.; Tian, L.; Zheng, Y.; Zhou, H.; Li, G.; Zhang, X. Towards Determining the Afferent Sites of Perception Feedback on Residual Arms of Amputees with Transcutaneous Electrical Stimulation. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; Volume 2015, pp. 3367–3370.
Figure 1. The whole workflow of the motion intention recognition system. Raw signals were first separated using a 200 ms sliding window with a 10 ms increment to extract the mDWT features. The mDWT features were then divided into samples in size of 200 × 48 (where the sample size is explained in Section 2.3.1) with a 20 overlap for classification.
Figure 2. The 17 classified gestures in this work. Gesture 1 is represented as m1, gesture 2 is represented as m2, and so on.
Figure 3. Twelve-channel raw sEMG signals from an amputee vs. the corresponding mDWT features.
Figure 4. Long-exposure segmentation to build inputs which contain more detailed information on time and space. One minor processing window contains 12 channels of 100 ms sEMG with an increment of 0.5 ms. Each minor processing window is used to calculate one subframe of the mDWT features. The final input contains 200 subframes of mDWT features.
Figure 5. The proposed LECNN structure. All three convolutional blocks contained a convolutional layer and a BN layer, while an additional pooling layer was added in Block2. A ReLU layer was adopted as an activation function after each BN layer. The sizes of the filters in the three convolutional layers were 3 × 3, 3 × 3, and 1 × 1, respectively. The pooling operation was completed using 2 × 2 filters.
Figure 6. The process of transfer learning. Parameters of three convolutional blocks in the pretrained source model were first transferred to the target model, serving as the initial value of the parameters. The parameters of Block1 were then fixed without updating, while Block2, Block3, and the fully connected layer were finetuned using the target data.
Figure 7. Recognition performance comparison between LECNN-TLs and LECNN-Onlys. The (left) is the classification accuracy, and the (right) is the weighted F1-score. The results of each amputee are presented to show the effectiveness of transfer learning on each individual.
Figure 8. Classification accuracies comparison of each gesture. The fact that the orange line lies higher than the blue one proves the improvement in accuracy on each gesture after using the proposed transfer learning strategy.
Figure 9. Correlations of classification accuracy and two clinical amputation parameters. (a) is the correlation between classification accuracy and Remaining forearm percentage. (b) is the correlation between classification accuracy and Years since amputation.
Table 1. Amputation information of 11 amputees in DB3.

| Subject | Handedness | Amputated Hand(s) | Remaining Forearm (%) | Years Since Amputation | Prosthesis Use |
| --- | --- | --- | --- | --- | --- |
| 1 | Right | Right | 50 | 13 | Myoelectric |
| 2 | Right | Left | 70 | 6 | Cosmetic |
| 3 | Right | Right | 30 | 5 | Myoelectric |
| 4 | Right | Right and Left | 40 | 1 | No |
| 5 | Left | Left | 90 | 1 | Kinematic |
| 6 | Right | Left | 40 | 13 | Kinematic |
| 7 | Right | Right | 0 | 7 | No |
| 8 | Right | Right | 50 | 5 | Myoelectric |
| 9 | Right | Right | 90 | 14 | Myoelectric |
| 10 | Right | Right | 50 | 2 | Myoelectric |
| 11 | Right | Right | 90 | 5 | Myoelectric |
