Article

Force-Invariant Improved Feature Extraction Method for Upper-Limb Prostheses of Transradial Amputees

by Md. Johirul Islam 1,2, Shamim Ahmad 3, Fahmida Haque 4, Mamun Bin Ibne Reaz 4, Mohammad Arif Sobhan Bhuiyan 5,* and Md. Rezaul Islam 1

1 Department of Electrical and Electronic Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
2 Department of Physics, Rajshahi University of Engineering and Technology, Rajshahi 6204, Bangladesh
3 Department of Computer Science and Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
4 Department of Electrical, Electronic and Systems Engineering, Universiti Kebangsaan Malaysia—UKM, Bangi 43600, Malaysia
5 Department of Electrical and Electronics Engineering, Xiamen University Malaysia, Bandar Sunsuria, Sepang 43900, Malaysia
* Author to whom correspondence should be addressed.
Diagnostics 2021, 11(5), 843; https://doi.org/10.3390/diagnostics11050843
Submission received: 27 March 2021 / Revised: 28 April 2021 / Accepted: 5 May 2021 / Published: 7 May 2021
(This article belongs to the Section Pathology and Molecular Diagnostics)

Abstract:
A force-invariant feature extraction method derives identical information for all force levels. However, the physiology of muscles makes it difficult to extract such unique information. In this context, we propose an improved force-invariant feature extraction method based on a nonlinear transformation of the power spectral moments, changes in amplitude, and the signal amplitude, along with spatial correlation coefficients between channels. The nonlinear transformation balances the force levels and increases the margin among the gestures. The correlation coefficient between channels captures the amount of spatial correlation while remaining independent of the strength of the electromyogram signal. To evaluate the robustness of the proposed method, we use an electromyogram dataset containing data from nine transradial amputees. The performance is evaluated using three classifiers against six existing feature extraction methods. The proposed feature extraction method yields higher pattern recognition performance, with significant improvements in accuracy, sensitivity, specificity, precision, and F1 score. In addition, the proposed method requires comparatively less computational time and memory, which makes it more robust than other well-known feature extraction methods.

1. Introduction

Electromyography (EMG) measures the electrical activity of muscles, which carries information related to their movement [1,2]. Two techniques are widely used for EMG signal acquisition: surface EMG and needle EMG [3]. Recently proposed noninvasive and contactless capacitive EMG is also very promising for acquiring EMG signals [4,5,6,7]. A feature extraction method then derives the information that identifies a unique movement. Consequently, EMG signals are widely used as a control strategy in myoelectric pattern recognition [8]. However, myoelectric prosthetic hand users are not satisfied with the performance and degrees of freedom of available prosthetic hands [9]. The performance of myoelectric pattern recognition is highly influenced by wrist orientation [10,11], arm position [12,13], electrode shift [14,15,16], the non-stationary characteristics of the signal [17], subject mobility [18], and muscle force variation [18,19,20,21]. Among these parameters, force variation is a vital physiological behavior of skeletal muscle that plays a key role in varying the amplitude and frequency characteristics of the EMG signal [22,23]. Therefore, researchers have tried to resolve the force variation problem in myoelectric pattern recognition.
Tkach et al. [24] studied the stability of the EMG pattern recognition performance of eleven time-domain features at low and high force levels using linear discriminant analysis (LDA). They observed that the pattern recognition performance of each individual time-domain feature degraded when the testing force level was not used in the training phase. They also observed that the autoregression coefficient (AR) feature performed better under muscle force variation. The AR feature, together with the root mean square (RMS) feature, was also reported by Huang et al. [25].
Scheme et al. [20] investigated the problems that force variation causes for EMG pattern recognition performance. Their study involved intact-limb subjects performing ten hand movements. They collected EMG data over a wide range of force levels, from 20% to 80% of the maximum voluntary contraction (MVC) in steps of 10%. They observed a high error rate, ranging from 32% to 45%, when the LDA classifier was trained with a single force level and tested with all force levels; among single-level training schemes, training at the 50% force level achieved the lowest error rate. However, the classifier improved to an error rate of 16% when trained with all force levels.
Al-Timemy et al. [19] proposed the time-dependent power spectrum descriptors (TDPSD) feature extraction method, based on the orientation between a set of spectral moments and a nonlinear map of the original EMG signal. Their study involved nine amputees, each performing six hand gestures at three force levels. The TDPSD achieved significant improvements of ≈6% to 8% in averaged classification performance compared with four well-known feature extraction methods when the LDA classifier was trained with all force levels. Furthermore, Khushaba et al. [26] proposed the temporal-spatial descriptors (TSD), which evaluate seven temporal features from each window and from the spatial differences between channel pairs, i.e., Cx−Cy. They evaluated the performance on five datasets, three of which involved amputees. TSD achieved a significant improvement of at least 8% in averaged classification performance across all subjects.
Most authors proposed feature extraction methods to improve force-invariant EMG pattern recognition performance but utilized multiple force levels for training to achieve satisfactory performance. However, an ideal force-invariant feature extraction method uses a single force level for training yet recognizes gestures both at that force level and at other force levels [27]. Moreover, short feature extraction times and small memory footprints are highly desirable so that the system can be implemented on a microcontroller [28,29,30,31].
He et al. [27] proposed a feature extraction method based on the discrete Fourier transform and muscle coordination. Their study involved intact-limb subjects with a specific electrode placement. The subjects performed eight gestures at three force levels, where low, medium, and high were defined as 20%, 50%, and 80% of the MVC, respectively. The proposed method achieved an 11% improvement in average performance compared with time-domain features. In addition, it achieved 91% force-invariant EMG pattern recognition performance when the medium force level was used for training. However, the major constraint of this work is that it requires a specific electrode position on the forearm, which is quite hard to ensure for all amputees. A short overview of the different feature extraction methods is given in Table 1.
In this context, we attempt to improve the force-invariant EMG pattern recognition performance for transradial amputees. This is more challenging than for intact-limb subjects, since an amputee's muscle structure is not as intact as that of an able-bodied subject [35,36]. We propose an improved force-invariant feature extraction method that extends the pilot work of Khushaba et al. [26]. Earlier work used higher-order moments as features [13,19,26] but did not use the frequency information of the corresponding higher-order moments, whereas Hudgins et al. [34] suggested that frequency information along with EMG signal strength yields better performance. Therefore, to determine the higher-order spectral moments along with frequency information, we employ the time derivative of the signal [26]. Moreover, all considered features are nonlinearly transformed, which makes the EMG signal at low force levels more discriminable relative to high force levels. This transformation balances the forces associated with different gestures and enhances the separation margin among the gestures. In addition to these nonlinear features, we consider the correlation coefficient (CC) for all channel pairs; it requires less computational time, since only a single parameter is calculated instead of all of the features mentioned in [26]. A salient characteristic of the CC is that it captures the correlation between channels placed over the underlying muscle groups independently of the amplitude of the EMG signal, which varies in proportion to the muscle force level. Therefore, the CC is expected to perform well in force-invariant EMG pattern recognition.
In this study, we use an EMG dataset of transradial amputees to evaluate the force-invariant EMG pattern recognition performance of the proposed feature extraction method. In addition, we compare the performance and robustness of the proposed method against six existing well-known feature extraction methods using three different classifiers.
The remainder of this paper is structured as follows. Section 2 describes the proposed feature extraction method, EMG dataset, and EMG pattern recognition method. Section 3 shows the force-invariant EMG pattern recognition performance, where the resulting performances are compared with those of other considered well-known feature extraction methods. Section 4 investigates the reasons behind the obtained improved performance, and Section 5 summarizes the overall experimental results.

2. Materials and Methods

2.1. The Proposed Feature Extraction Method

A discrete EMG signal of window size $N$, sampled at $f_S$ Hz, can be expressed as $x[iT]$, $i = 0, 1, 2, \ldots, N-1$, where $T = 1/f_S$; for brevity, $x[iT]$ is also written as $x[i]$. Parseval's theorem, Equation (1), states that the sum of the squares of a function is identical to the sum of the squares of its Fourier transform:
\sum_{i=0}^{N-1} x[i]^2 = \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, X^{*}[k] = \sum_{k=0}^{N-1} P[k]    (1)
where $X^{*}[k]$ is the complex conjugate of $X[k]$ and $P[k]$ is the corresponding power spectral density at frequency index $k$. The following equation relates the derivative of the time-domain signal to the frequency-domain signal:

F[\Delta^{n} x[i]] = k^{n} X[k]    (2)

where $F$ is the discrete Fourier transform operator and $n$ is the order of the derivative. The proposed features, derived using Equations (1) and (2), are as follows:
Zero-order power spectrum (P0): The zero-order power spectrum measures the signal strength in the frequency domain [13,19,26]. According to Equation (1), $P_0$ is defined as follows:

P_0 = \sum_{i=0}^{N-1} x[i]^2
Second-, fourth-, and sixth-order power spectra (P2, P4, and P6): Hjorth et al. [37] defined the second-order moment as the power of the signal. According to Equation (2), it is defined as follows:

P_2 = \sum_{k=0}^{N-1} k^2 P[k] = \frac{1}{N} \sum_{k=0}^{N-1} \left[ k\, X[k] \right]^2 = \sum_{i=0}^{N-1} \left[ \Delta x[i] \right]^2
The higher-order power spectra are defined by repeating the process:

P_4 = \sum_{k=0}^{N-1} k^4 P[k] = \frac{1}{N} \sum_{k=0}^{N-1} \left[ k^2 X[k] \right]^2 = \sum_{i=0}^{N-1} \left[ \Delta^2 x[i] \right]^2

P_6 = \sum_{k=0}^{N-1} k^6 P[k] = \frac{1}{N} \sum_{k=0}^{N-1} \left[ k^3 X[k] \right]^2 = \sum_{i=0}^{N-1} \left[ \Delta^3 x[i] \right]^2

The odd-order power spectra are zero; as a result, only the effective even-order power spectra $P_2$, $P_4$, and $P_6$ are considered.
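The authors' implementation is in MATLAB; as a quick, illustrative check of Parseval's theorem (Equation (1)) underlying these definitions, consider the following Python sketch (the window length and random test signal are our own choices, not from the paper):

```python
import numpy as np

# Numerical check of Parseval's theorem (Equation (1)): the energy of
# a window in the time domain equals the summed power spectral density.
rng = np.random.default_rng(0)
N = 256                              # illustrative window length
x = rng.standard_normal(N)           # stand-in for one EMG window

X = np.fft.fft(x)
P = (X * np.conj(X)).real / N        # P[k] from Equation (1)
energy_time = np.sum(x ** 2)
energy_freq = np.sum(P)
print(abs(energy_time - energy_freq) < 1e-9)   # True
```

Because the identity is exact, the higher-order spectra can equally be computed from successive time differences, avoiding an explicit FFT per window.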
First- and second-order average amplitude change (AC1 and AC2): Unlike in [34], the average change in amplitude conveys indirect frequency information: a larger change in amplitude implies a higher frequency and vice versa.

AC_1 = \frac{1}{N-1} \sum_{i=0}^{N-1} \left| \Delta x \right|

AC_2 = \frac{1}{N-2} \sum_{i=0}^{N-1} \left| \Delta^2 x \right|
Mean value (MV): According to [34], the mean value represents the signal strength and is defined mathematically as

MV = \frac{1}{N} \sum_{i=0}^{N-1} \left| x[i] \right|
EMG pattern recognition performance varies with force variation [19]. In addition, EMG signals with small amplitude values suffer from the smallest separable margins among them. Nonlinear functions such as the square root and the logarithm have been used for this purpose, as described in [13,31]. Here, we apply the natural logarithm (log_e x) to the seven extracted features to obtain the final features $f_1$, $f_2$, $f_3$, $f_4$, $f_5$, $f_6$, and $f_7$, as shown in Figure 1.
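A minimal Python sketch of the seven log-transformed features described above (the original implementation is in MATLAB; the function name, variable names, and the 150 ms window at 2 kHz are illustrative assumptions):

```python
import numpy as np

def proposed_features(x):
    """The seven log-transformed features f1..f7 for one EMG window:
    power spectra P0, P2, P4, P6 computed via successive time
    differences (Parseval's theorem), amplitude changes AC1, AC2,
    and the mean absolute value MV, each passed through the
    natural logarithm."""
    N = len(x)
    d1 = np.diff(x)                      # Δx
    d2 = np.diff(d1)                     # Δ²x
    d3 = np.diff(d2)                     # Δ³x
    P0 = np.sum(x ** 2)                  # zero-order power spectrum
    P2 = np.sum(d1 ** 2)                 # second-order moment
    P4 = np.sum(d2 ** 2)                 # fourth-order moment
    P6 = np.sum(d3 ** 2)                 # sixth-order moment
    AC1 = np.sum(np.abs(d1)) / (N - 1)   # first-order amplitude change
    AC2 = np.sum(np.abs(d2)) / (N - 2)   # second-order amplitude change
    MV = np.mean(np.abs(x))              # mean absolute value
    return np.log(np.array([P0, P2, P4, P6, AC1, AC2, MV]))

rng = np.random.default_rng(1)
window = rng.standard_normal(300)        # 150 ms at 2 kHz (illustrative)
f = proposed_features(window)
print(f.shape)                           # (7,)
```

Per channel this yields seven values; over eight channels that accounts for the 56 non-CC dimensions of the proposed feature space.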
Correlation coefficients: The size of a motor unit and its firing rate change muscle force, which in turn play a role in varying the EMG signal’s amplitude and its frequency spectrum [22]. Consequently, the amplitude- and frequency-domain features extracted from that EMG signal also fluctuate. Naturally, these fluctuations of the features highly affect EMG pattern recognition performance [20,27]. However, this problem caused by the force variation can be minimized if the features are made force independent.
The CC statistically determines the strength and direction of a linear relationship between two variables. Its most salient property is that it is independent of the origin and the unit of measurement of the two variables. In multichannel EMG acquisition, the CC between any two channels placed over the underlying muscles varies with the gesture, since the active muscles are unique to each gesture. Additionally, while the strength of the EMG signal changes with force, the set of active muscles remains unchanged across all force levels [27]. Therefore, the CC is expected to be a force-independent feature. The linear correlation coefficient ρ(x,y) for channels x and y is given by the following formula:
\rho(x, y) = \frac{\mathrm{Cov}(x, y)}{\sigma_x \sigma_y} = \frac{\sum_{i=0}^{N-1} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=0}^{N-1} (x_i - \bar{x})^2}\, \sqrt{\sum_{i=0}^{N-1} (y_i - \bar{y})^2}}
where $\bar{x}$, $\bar{y}$, and $N$ represent the mean of channel $x$, the mean of channel $y$, and the number of samples in a channel, respectively. If there are $n$ channels, the number of channel pairs is $C_2^n$, which equals the dimension of the correlation coefficient feature. The whole feature extraction procedure is illustrated in Figure 1.
Figure 1. The block diagram of the proposed feature extraction procedure.
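The correlation coefficient features for all channel pairs can be sketched as follows (a Python illustration, not the authors' code; the final scaling check illustrates why the CC is expected to be force invariant):

```python
import numpy as np
from itertools import combinations

def cc_features(window):
    """Correlation coefficients for all channel pairs; window has
    shape (channels, samples). Eight channels give C(8,2) = 28
    features."""
    n = window.shape[0]
    R = np.corrcoef(window)          # n x n correlation matrix
    return np.array([R[i, j] for i, j in combinations(range(n), 2)])

rng = np.random.default_rng(2)
emg = rng.standard_normal((8, 300))  # 8 channels, 150 ms at 2 kHz
cc = cc_features(emg)
print(cc.shape)                      # (28,)

# Scaling each channel's amplitude (as a stronger contraction would)
# leaves the correlation coefficients unchanged: force invariance.
scaled = emg * np.linspace(1.0, 3.0, 8)[:, None]
print(np.allclose(cc, cc_features(scaled)))   # True
```

The 28 pairwise coefficients plus the 7 × 8 log-transformed features give the 84-dimensional space used later in Section 2.3.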

2.2. Description of EMG Dataset

The EMG dataset of transradial amputees was obtained from the dedicated website of the second author [19]. The dataset contains nine transradial amputees: seven traumatic (TR1–TR7) and two congenital (CG1 and CG2), where each amputee was asked to perform six gestures during data collection. The considered gestures were thumb flexion, index flexion, fine pinch, tripod grip, hook grip (hook or snap), and spherical grip (power). However, performing an imagined gesture was very challenging for the transradial amputees. Therefore, the amputees used the support of their intact hand to perform an imagined gesture. In addition, the amputees observed visual feedback for each channel in LabVIEW (National Instruments, Austin, TX, USA). During data collection, each amputee produced three force levels, defined as low, medium, and high, maintaining each force level while watching the real-time EMG signal displayed on the LabVIEW screen. Each transradial amputee performed five to eight trials with a duration of 8 to 12 s. Thus, the total number of EMG signals collected from an amputee equals the product of the numbers of forces, gestures, and trials. A custom-built EMG acquisition system was employed, with the EMG signal sampled at 2000 Hz using Ag/AgCl electrodes (Tyco Healthcare, Germany). Differential signal electrode pairs were placed around the forearm of the amputee, and the ground electrode was placed on the elbow joint (Figure 2). The number of EMG channels varied from 8 to 12 from one amputee to another depending on the remaining stump length; however, the first eight channels are common to all amputees, and these electrodes were placed around the forearm only.
Therefore, we employed the data collected from these eight channels to evaluate EMG pattern recognition [19]. In addition, we used only the first five trials, each collected at a different time for the same gesture, so as to maintain the 5-fold cross-validation described in Section 2.3.

2.3. EMG Pattern Recognition

In this study, EMG pattern recognition performance was analyzed using MATLAB® 2017a (MathWorks, Natick, MA, USA). An overlapped rectangular windowing scheme was used with a window duration of 150 ms and an overlap of 50 ms between adjacent windows [19]. The required average delay between successive predictions was 100 + τ ms (where τ is the classifier prediction time); therefore, the processing time, or average system delay, was within the acceptable limit for a real-time prosthetic hand [38]. Each 150 ms window of the EMG signal was preprocessed using cascaded digital filters: a 20 Hz high-pass filter, a 500 Hz low-pass filter, and a 50 Hz notch filter were used to remove movement artefacts [39], high-frequency noise [28], and power line artefacts [40], respectively. In the feature extraction stage, the proposed features, $f_1$–$f_7$ and CC, were evaluated with a feature dimension of 84 (number of features × number of channels + $C_2^n$ correlation coefficients = 7 × 8 + 28 = 84). We compared the proposed feature extraction method against six well-known feature extraction methods at three different force levels. These include the following:
TSD [26] comprises seven features: the root squared zero-, second-, and fourth-order moments; sparseness; irregularity factor; coefficient of variation; and the Teager–Kaiser energy operator. These seven features are evaluated from each channel and from each difference between channel pairs, i.e., $C_x - C_y$. Therefore, TSD provides a 252-dimensional feature space (number of features × (number of channels + $C_2^n$ pairs of $C_x - C_y$) = 7 × (8 + 28) = 252).
TDPSD [19] defines six features that are extracted from the time-domain EMG signal. TDPSD features include the root squared zero-order, second-order, and fourth-order moments; the sparseness; the irregularity factor; and waveform length ratio. Hence, TDPSD provides a 48-dimensional feature space.
Wavelet features [32] include the energy, variance, standard deviation, waveform length, and entropy computed from a five-level decomposition of the coefficients using the Symmlet-8 wavelet family. The wavelet feature dimension is 240 (number of features × (decomposition levels + original) × number of channels = 5 × (5 + 1) × 8 = 240).
Du et al. [33] used the six time-domain features (TDF), which were the integral of EMG, waveform length, variance, zero-crossing, slope sign change, and the Wilson amplitude. Therefore, the dimension of the TDF is 48.
Huang et al. [25] used seven features: the coefficients of a sixth-order AR model along with the RMS value (AR-RMS). This creates a 56-dimensional feature space.
Hudgins et al. [34] defined five features, four of which (TD) are very popular in myoelectric pattern recognition: the mean absolute value, waveform length, zero crossings, and slope sign changes. These four features produce a 32-dimensional feature space.
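As an example of the baseline methods, the four Hudgins TD features can be sketched in Python as follows (a hedged illustration; the optional noise threshold eps and all names are ours, not from [34]):

```python
import numpy as np

def hudgins_td(x, eps=0.0):
    """The four popular Hudgins time-domain (TD) features for one
    window: mean absolute value, waveform length, zero crossings,
    and slope sign changes. eps is an optional noise threshold
    (0 here). Computed per channel, so 8 channels give 32 features."""
    mav = np.mean(np.abs(x))
    wl = np.sum(np.abs(np.diff(x)))
    zc = np.sum((x[:-1] * x[1:] < 0) &
                (np.abs(x[:-1] - x[1:]) >= eps))
    d = np.diff(x)
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 ((np.abs(d[:-1]) >= eps) | (np.abs(d[1:]) >= eps)))
    return np.array([mav, wl, zc, ssc])

rng = np.random.default_rng(3)
td = hudgins_td(rng.standard_normal(300))
print(td.shape)   # (4,)
```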
To reduce the computational time, the higher-dimensional feature spaces were reduced to $c - 1$ dimensions ($c$ is the number of gestures) using spectral regression discriminant analysis (SRDA) [41]. Various classifiers are widely used in EMG pattern recognition: convolutional neural networks (CNNs) [42,43], artificial neural networks (ANNs) [1,44], linear discriminant analysis (LDA) [45], support vector machines (SVMs) [46,47], and k-nearest neighbors (KNN) [48,49]. Among these, the CNN provides better EMG recognition performance but requires more time to learn the model [50]. Therefore, we employed widely used classifiers: the LDA with a quadratic function [20,51], the SVM with a Gaussian radial basis function [46], and the KNN with three neighbors [13]. In this evaluation, four of the first five trials were used as training data and the remaining one as testing data; the process was repeated five times so that each trial was used once as testing data, i.e., 5-fold cross-validation. The performance (F1 score) of each fold was found to be consistent with the other folds, which confirms that the data are not overfitted. The number of training samples equals the product of the numbers of training force levels, training trials, gestures, and samples per trial; similarly, the number of testing samples is the product of the numbers of testing force levels, testing trials, gestures, and samples per trial. Since the EMG signal duration varies from 8 to 12 s in this dataset, the numbers of training and testing samples also vary slightly from one amputee to another. Finally, EMG pattern recognition performance was measured in terms of accuracy, sensitivity, specificity, precision, and F1 score [52,53]. These parameters are evaluated as follows:
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}

\mathrm{Sensitivity} = \frac{TP}{TP + FN}

\mathrm{Specificity} = \frac{TN}{TN + FP}

\mathrm{Precision} = \frac{TP}{TP + FP}

F1\;\mathrm{Score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Sensitivity}}{\mathrm{Precision} + \mathrm{Sensitivity}}
where TP, TN, FP, and FN represent the true positive, true negative, false positive, and false negative values, respectively.
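From per-class confusion-matrix counts, the five metrics above can be computed as in this sketch (the counts used are illustrative only, not results from the paper):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity, precision, and F1 score
    from per-class confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Illustrative counts for one gesture class (not from the paper).
acc, sen, spe, pre, f1 = classification_metrics(tp=90, tn=440, fp=10, fn=10)
print(round(acc, 3), round(sen, 3), round(f1, 3))   # 0.964 0.9 0.9
```

In a multi-class setting such as the six gestures here, these per-class values are typically averaged across classes.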

2.4. EMG Pattern Recognition Performance with Training Strategies of Various Force Levels

In daily life, we frequently change muscle forces as required for every movement. Recent studies [19,20,27] illustrated the effect of training strategies with various force levels on EMG pattern recognition performance. Therefore, we study several training and testing schemes for the proposed feature extraction method and the considered well-known feature extraction methods:
Case 1: Training and testing the classifiers with the same force level.
Case 2: Training the classifiers with a single force level at a time and testing the classifiers with all three force levels.
Case 3: Training the classifiers with any two force levels at a time and testing the classifiers with all three force levels.
Case 4: Training the classifiers with all three force levels and testing the classifiers with all three force levels.
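The four cases above can be organized as (train, test) force-level sets; this Python sketch (names are ours) enumerates the schemes:

```python
from itertools import combinations

# The three force levels in the dataset.
forces = ["low", "medium", "high"]

# Cases 1-4 as lists of (training forces, testing forces) schemes.
cases = {
    1: [([f], [f]) for f in forces],                          # same level train/test
    2: [([f], forces) for f in forces],                       # train one, test all
    3: [(list(p), forces) for p in combinations(forces, 2)],  # train two, test all
    4: [(forces, forces)],                                    # train all, test all
}
print(len(cases[1]), len(cases[2]), len(cases[3]), len(cases[4]))  # 3 3 3 1
```

Cases 1-3 each contain three training schemes, which is why the statistical test below concatenates 9 amputees × 3 schemes per case.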

2.5. Statistical Test

To determine the significance of differences between the proposed method and the other methods, a Bonferroni-corrected analysis of variance (ANOVA) test was utilized with a significance level of 0.05. Obtained p-values below 0.05 imply that the performance of the proposed method is significantly different. In this study, the EMG pattern recognition performances of the nine amputees for each training scheme within a case (i.e., Case 1, Case 2, and Case 3) are concatenated to construct a 27-dimensional vector (9 amputees × 3 training schemes per case), and then the Bonferroni-corrected ANOVA is performed. For Case 4, where there is only one training scheme, plain ANOVA is performed.

2.6. RES Index

To evaluate the clustering performance of a feature or a feature extraction method, the RES (ratio of Euclidean distance to standard deviation) index is employed. A higher RES index indicates a larger separation margin among the classes and vice versa. The RES index is evaluated as follows [54]:
\mathrm{RES\ Index} = \frac{\overline{ED}}{\bar{\sigma}}
where $\overline{ED}$ is the average Euclidean distance between the means of gestures $p$ and $q$, defined mathematically as
\overline{ED} = \frac{2}{K(K-1)} \sum_{p=1}^{K-1} \sum_{q=p+1}^{K} \sqrt{(m_1^p - m_1^q)^2 + (m_2^p - m_2^q)^2}
where $m$ and $K$ denote the mean value of a feature and the total number of gestures, respectively. The dispersion of the clusters is given by
\bar{\sigma} = \frac{1}{IK} \sum_{i=1}^{I} \sum_{k=1}^{K} S_{ik}
where $I$ is the size of the feature vector and $S_{ik}$ is the dispersion (standard deviation) of feature $i$ within gesture $k$.
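The RES index can be sketched in Python as follows; here S_ik is taken as the standard deviation of feature i within gesture k, which is our reading of the dispersion term, and the synthetic 6-gesture data are illustrative only:

```python
import numpy as np
from itertools import combinations

def res_index(f1, f2, labels):
    """RES index (ratio of Euclidean distance to standard deviation)
    for a 2D feature space. Larger values mean better-separated,
    more compact gesture clusters."""
    classes = sorted(set(labels))
    K = len(classes)
    # Per-gesture means of the two feature dimensions.
    means = {c: (f1[labels == c].mean(), f2[labels == c].mean()) for c in classes}
    # Average Euclidean distance over all gesture-mean pairs.
    ed = sum(np.hypot(means[p][0] - means[q][0], means[p][1] - means[q][1])
             for p, q in combinations(classes, 2))
    ed_bar = 2.0 / (K * (K - 1)) * ed
    # Average within-gesture standard deviation over both features (I = 2).
    sigma_bar = np.mean([f[labels == c].std() for c in classes for f in (f1, f2)])
    return ed_bar / sigma_bar

# Synthetic, well-separated 6-gesture data (illustrative only).
rng = np.random.default_rng(4)
labels = np.repeat(np.arange(6), 25)             # 6 gestures, 25 samples each
f1 = rng.standard_normal(150) + labels * 3.0     # separated along feature 1
f2 = rng.standard_normal(150)
r = res_index(f1, f2, labels)
print(r > 1.0)   # True: separated clusters give a high RES index
```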

3. Results

3.1. Signal Observation

To observe the impact of muscle force variation on a gesture, we considered the thumb flexion gesture. Figure 3a shows the raw EMG signal for the three muscle force levels (low, medium, and high) for a window size of 150 ms. In addition, one feature (f1) was calculated and is shown in the spider plot (Figure 3b). Both figures indicate that the EMG signal strength increases as the muscle force level increases. Additionally, Figure 3b shows that, although the EMG signal strength increases with muscle force, the muscle activation pattern across the channels remains almost identical for all force levels.

3.2. Impact of Nonlinear Transformation

The impact of the nonlinear transformation (logarithm) on the 2D feature space is shown in the scatter plots of Figure 4. In these plots, we used 25 sample points for each gesture from the dataset of amputee 1; thus, the total number of sample points is 450 (muscle force levels × gestures × sample points per movement = 3 × 6 × 25). The left (Figure 4a) and right (Figure 4b) scatter plots show the original features (MV, P0, P2, P4, P6, AC1, AC2, and CC) and the nonlinearly transformed features ($f_1$–$f_7$ and CC), respectively; each color indicates a gesture. First, the 84-dimensional feature space for each force level was reduced to a 5-dimensional feature space using SRDA; then the first two dimensions were normalized and plotted. The figures show an almost consistent muscle activation pattern among the gestures across all force levels. Additionally, the logarithm discriminates more strongly at low amplitude values and less at high amplitude values. Figure 4b shows a higher RES index than Figure 4a, meaning that the margin among the gestures is increased. In addition to the improved separation margin, the nonlinear transformation also yields more compact clusters across forces for each gesture.

3.3. The Impact of Window Length on Clustering Performance

To determine the impact of window length on clustering performance, we varied the window length from 50 ms to 400 ms in equal intervals of 50 ms and observed the scatter plot and the RES index simultaneously, as shown in Figure 5. The scatter plot visualizes the clustering performance and the separation margin among the gestures, and the RES index quantifies them. As before, we used 25 sample points for each gesture from the dataset of amputee 1, for a total of 450 sample points (3 × 6 × 25), reduced the 84-dimensional feature space for each force level to 5 dimensions using SRDA, and plotted the first two normalized features (SRDA feature 1 and SRDA feature 2). The results in Figure 5 indicate that the clustering performance (RES index) decreases as the window length decreases. Some fluctuations in the RES index are also observed for window lengths above 200 ms; the stochastic nature of the EMG signal may explain this fluctuation.

3.4. Training and Testing the Classifiers with Same Force Level (Case 1)

Training and testing the classifiers with the same force level is a common strategy in many studies. In this scheme, the average EMG pattern recognition performance across the nine transradial amputees was evaluated by accuracy, sensitivity, specificity, precision, and F1 score, as shown in Appendix A (Table A1). The performance is also shown graphically in Figure 6 using the F1 score, since the F1 score combines sensitivity and precision. The experimental results show that the proposed feature extraction method yields the highest EMG pattern recognition performance on all evaluation parameters compared with the considered existing feature extraction methods; the recently proposed TSD yields the second-highest performance. The proposed method improves the accuracy, sensitivity, specificity, precision, and F1 score by 0.58, 1.73, 0.32, 1.42, and 1.77, respectively, when the SVM classifier is trained and tested with the medium force level, and shows consistent improvements with the LDA and KNN. Moreover, the significant difference between the proposed method and each existing method is confirmed by the Bonferroni-corrected ANOVA: the highest obtained p-value is 1.55 × 10⁻⁴ across all performance parameters and classifiers, which strongly indicates that the performance of the proposed feature extraction method is significantly different from those of the other methods.

3.5. Training the Classifiers with a Single Force Level at a Time and Testing the Classifiers with All Three Force Levels (Case 2)

In this scheme (Case 2), the classifiers were trained with a single force level and tested with that known force level along with the two other, unknown force levels. The average performances, with standard deviations, across the nine amputees are given in Appendix A (Table A2) and summarized graphically in Figure 7 using the F1 score for simplicity. The results show that unknown forces degrade EMG pattern recognition performance compared with Case 1. An interesting finding is that the classifiers predict the unknown low and high force levels more effectively when trained with the medium force level. In this single-force training scheme, the LDA and SVM classifiers yield almost the same performance, slightly better than the KNN. Even in the worst case, however, the proposed feature extraction method yields the highest performance on every evaluation parameter compared with the other methods. In the best case, with medium force level training, the proposed method improves the accuracy, sensitivity, specificity, precision, and F1 score by 1.12, 3.35, 0.67, 2.86, and 3.30, respectively, compared with the TSD using the SVM classifier. The obtained p-values between the proposed method and the other methods for each classifier are all smaller than 9.27 × 10⁻⁴, which confirms a significant improvement by the proposed feature extraction method.

3.6. Training the Classifiers with Any Two Force Levels at a Time and Testing the Classifiers with All Three Force Levels (Case 3)

The average EMG pattern recognition performances in terms of accuracy, sensitivity, specificity, precision, and F1 score for the proposed feature extraction method and the other well-known methods are shown in Appendix A (Table A3). In this case, the classifiers are trained with any two force levels and tested with all three force levels. The performances for the different training pairs of forces are shown graphically in Figure 8, where only the F1 score is used for simplicity. The experimental results imply that, when the number of training force levels is increased, the classifiers improve their performance in recognizing the two known force levels used in training and the one unknown force level. In this training case, the F1 score improves by about 10 percentage points compared with Case 2. The proposed feature extraction method improves the accuracy, sensitivity, specificity, precision, and F1 score by 0.57, 1.73, 0.36, 1.66, and 1.74 percentage points, respectively, when the SVM classifier is trained with the low and high force levels and tested with all force levels. In addition, the p-values obtained between the proposed feature extraction method and each of the existing methods for each classifier are very low, all below 4.50 × 10⁻⁶, which shows a significant performance improvement by the proposed feature extraction method.

3.7. Training the Classifiers with All Three Force Levels and Testing the Classifiers with All Three Force Levels (Case 4)

In this case, all force levels were used to train and test the classifiers. The EMG pattern recognition performances in terms of accuracy, sensitivity, specificity, precision, and F1 score were then evaluated for all considered feature extraction methods, as shown in Appendix A (Table A4). The performances are also shown graphically in Figure 9 using the F1 score only. Following the previous trend, the proposed feature extraction method improves the accuracy, sensitivity, specificity, precision, and F1 score by 0.57, 1.70, 0.33, 1.53, and 1.70 percentage points, respectively, compared with the TSD using the SVM classifier. The proposed method yields the highest F1 score, 89.06%, with the SVM classifier. ANOVA was again performed between the proposed feature extraction method and each of the existing feature extraction methods for each classifier; the p-values obtained are very small, all below 1.21 × 10⁻², which confirms the significant performance improvement by the proposed feature extraction method.
To compare the amputee-wise performance among all considered feature extraction methods, we used the SVM classifier only, since it provides better performance in most cases. The results shown in Figure 10 imply that the proposed feature extraction method yields the highest F1 score for most of the amputees (all except TR6). For some amputees (TR1, TR7, and CG1), however, TSD yields a performance similar to that of the proposed feature extraction method.

3.8. Computational Time and Memory Size

To measure the computational load of each feature extraction method, we considered computational time and memory size [30,31]. The computational time of each method was measured in MATLAB® 2017a on an Intel Core i3-7100U CPU (2.40 GHz) with 8 GB RAM. The recorded computational times shown in Figure 11 demonstrate that the proposed feature extraction method requires the least time of all methods except TD; however, TD offers very low performance compared with the proposed method in all cases. In addition to computational time, we also measured the memory used by each feature extraction method, using the MATLAB® 2017a function whos. Figure 12 shows that the proposed feature extraction method requires less memory than TSD and Wavelet. Although TDPSD, TDF, AR-RMS, and TD require less memory than the proposed method, their EMG pattern recognition performances are also lower. We therefore conclude that the proposed feature extraction method is faster and requires less, or comparable, memory once recognition performance is taken into account, which makes it suitable for real-time operation.
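The measurement itself is straightforward to reproduce in outline. The study timed each method in MATLAB and read memory usage with `whos`; the sketch below shows the analogous measurement in Python, where `extract_features` is a placeholder for any of the compared methods and the mean-absolute-value (MAV) feature is only a stand-in, not one of the paper's feature sets.

```python
import time
import numpy as np

def benchmark(extract_features, window, n_runs=100):
    """Average per-window computation time (ms) and feature-vector memory (bytes)."""
    t0 = time.perf_counter()
    for _ in range(n_runs):
        extract_features(window)
    elapsed_ms = 1000 * (time.perf_counter() - t0) / n_runs
    feats = np.asarray(extract_features(window))
    return elapsed_ms, feats.nbytes      # nbytes plays the role of MATLAB's whos

# e.g. a trivial stand-in feature set: MAV per channel on a 6-channel window
window = np.random.randn(6, 256)
mav = lambda w: np.mean(np.abs(w), axis=1)
t_ms, mem = benchmark(mav, window)
```

Per-window time and feature-vector size are what matter for a microcontroller budget: each window must be processed well within the control loop's update interval.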

4. Discussion

Muscle force variation is a frequent scenario in daily life. The amount of muscle force for a particular activity is set by the central nervous system (CNS), which has been trained by our daily activities since childhood [27,55]. During muscle force variation, the CNS varies the time- and frequency-domain characteristics of the EMG signal, which in turn drastically varies the extracted features, making them unsuitable for force-invariant EMG pattern recognition. Previous studies have found that EMG pattern recognition performance degrades significantly when unknown force levels are used for testing [21,22,23]. The problem becomes more challenging when we consider amputees rather than intact-limb subjects [35,36].
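One facet of this force dependence is easy to illustrate: the power spectral moments of the EMG signal scale multiplicatively with contraction strength, and a logarithmic compression, of the kind used in the TDPSD/TSD family and in the nonlinear transformation discussed below, turns that multiplicative change into a small additive shift. The following Python sketch works under these assumptions and is not the exact transformation of the proposed method.

```python
import numpy as np

def log_moment_features(window):
    """Zeroth-order power spectral moment (signal power) per channel,
    compressed with a log transform. By Parseval's theorem the zeroth
    moment equals the time-domain energy, so no FFT is needed."""
    m0 = np.sum(window ** 2, axis=1)
    return np.log(m0)

rng = np.random.default_rng(1)
low = rng.standard_normal((6, 256))      # 6-channel window at low force
high = 3.0 * low                         # same activity at ~3x amplitude
gap_raw = np.abs(np.sum(high ** 2, axis=1) - np.sum(low ** 2, axis=1)).max()
gap_log = np.abs(log_moment_features(high) - log_moment_features(low)).max()
# raw power shifts by a factor of 9; log features shift only by log(9)
```

The raw power features move by thousands of units between the two force levels, while the log features shift by only log 9 ≈ 2.2 per channel, independent of the signal content.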
In this study, we propose an improved force-invariant feature extraction method comprising seven nonlinear features along with the CC, validated on nine transradial amputees and compared with six existing feature extraction methods. The proposed method is an extension of the TSD and the TD [26,34], but it provides improved force-invariant EMG pattern recognition performance compared with the original works. In addition, the proposed method requires less computational time than the other feature extraction methods except for the TD [19,25,26,32,33]; the TD is slightly faster, but its performance is very low and does not meet the criteria for satisfactory performance [20]. The proposed method also requires comparatively less memory than the TSD and the Wavelet. Therefore, the proposed feature extraction method may be implemented on a microcontroller [30,31].
The proposed feature extraction method provides improved force-invariant pattern recognition performance because of its use of higher-order indirect frequency information along with its moments, the nonlinear transformation of signals, and the CC. The indirect frequency information of the higher-order differential signal is important because differentiation varies the spectrum nonlinearly, which emphasizes the high-frequency content of the EMG signal. Additionally, the nonlinear transformation balances the forces and enhances the separation margin among gestures. Finally, the CC measures the correlation between any two EMG channels, which contributes greatly to the improved force-invariant EMG pattern recognition obtained. The salient feature of the CC is that it does not depend on the signal strength of each channel; rather, it depends on the activity of the underlying active muscles. Hence, the CC values vary with the gesture only.
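The claimed scale invariance of the CC is easy to verify: scaling one channel by a constant, as a stronger contraction with the same underlying activity pattern would, leaves the correlation coefficients unchanged. Below is a minimal sketch of such a CC feature, assuming Pearson correlation over a 6-channel analysis window; it is not the authors' exact implementation.

```python
import numpy as np

def channel_cc(emg):
    """Pairwise correlation coefficients between EMG channels.

    emg: array of shape (channels, samples).
    Returns the upper-triangle CC values, one feature per channel pair.
    """
    r = np.corrcoef(emg)                  # (channels, channels) CC matrix
    iu = np.triu_indices_from(r, k=1)     # each channel pair counted once
    return r[iu]

rng = np.random.default_rng(0)
emg = rng.standard_normal((6, 256))
cc_low = channel_cc(emg)
emg_high = emg.copy()
emg_high[0] *= 5.0                        # same pattern, higher "force" on channel 0
cc_high = channel_cc(emg_high)
# cc_low and cc_high are identical: the CC ignores per-channel signal strength
```

With 6 channels this yields 15 CC features, one per channel pair.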
It is challenging to train the classifier with all possible force levels for each gesture in the way we utilize various force levels in our daily activities [27]. In this context, our proposed feature extraction method shows good force-invariant performance when the classifier is tested with gestures at an unknown force level (Case 2 and Case 3). The experimental results reveal that the proposed method performs better in force-invariant EMG pattern recognition (Case 2 and Case 3) in terms of all performance-evaluating parameters, i.e., accuracy, sensitivity, specificity, precision, and F1 score. However, it is also observed that the proposed force-invariant feature extraction method does not yet yield EMG pattern recognition performance at a fully satisfactory level, owing to the deformed muscle structure of amputees and their lack of proper training [35,36]. The classifier was therefore also trained and tested with all force levels (Case 4); a robust characteristic of the proposed feature extraction method is that it again performs better than the existing feature extraction methods. The proposed method likewise performs better for regular EMG pattern recognition, when the classifier is trained and tested with a single force level (Case 1). In this training strategy, the performance obtained is much better (by about 3 to 4 percentage points in the F1 score) than when the classifier is trained and tested with all force levels (Case 4). A possible reason for the degraded performance in Case 4 is that the muscle activation pattern for each gesture of a transradial amputee is neither unique nor repeatable; that is, it does not behave in the same manner across force levels. Therefore, an amputee should be trained properly to achieve improved EMG pattern recognition performance over various force levels.
In this study, the performance of the proposed feature extraction method was evaluated with the LDA, SVM, and KNN classifiers under different training and testing cases. In all cases, the proposed method shows consistently improved performance compared with the existing feature extraction methods, which demonstrates its robustness. Moreover, the very low p-values between the proposed method and each of the other methods demonstrate the statistical significance of the experimental results. The LDA and SVM classifiers show almost equal performances, slightly better than that of the KNN. The SVM classifier yields the highest F1 score, 89.06%, with our proposed feature extraction method when trained with all three forces. The achieved performance is much better than those reported in the original works on the TDPSD, the TSD, and the recently proposed fractal feature set [19,26,56].
It is also observed that the classifiers yield the highest EMG pattern recognition performance when only the medium force level, or the combined low and high force levels, are used for training. This reveals that adjacent forces are highly interrelated, which may help the classifier achieve the highest performance. It is therefore advisable to train the classifier with force levels to which each testing force level is highly interrelated.
Another important point is that the traumatic amputees provide slightly better EMG pattern recognition performance than the congenital amputees: since they had intact limbs before the trauma occurred, they have better control over their muscles. However, regardless of the type of amputee, the proposed feature extraction method promises the highest, or very close to the highest, performance for all types of amputees.
In this study, we compared our proposed feature extraction method offline on the standard dataset collected from [19]. Real-time analysis with other amputees will be performed in future work.

5. Conclusions

In this research, a new time-domain feature extraction method is proposed to obtain improved force-invariant EMG pattern recognition performance. The proposed feature extraction method improves the performance across nine transradial amputees in terms of accuracy, sensitivity, specificity, precision, and F1 score. In addition to the improved performance, it requires relatively less computational time and memory than the other methods. The recently proposed TSD provides the second-best performance after the proposed method, but it requires much more processing time and memory owing to its high-dimensional feature space. Moreover, the Bonferroni-corrected ANOVA indicates significant differences between the proposed method and the other methods. Therefore, the proposed feature extraction method is the best option among the methods considered for obtaining improved force-invariant myoelectric pattern recognition using a microcontroller.

Author Contributions

Conceptualization, M.J.I., S.A. and M.R.I.; methodology, M.J.I.; software, S.A.; validation, M.J.I., S.A. and M.B.I.R.; formal analysis, M.J.I.; investigation, M.J.I.; resources, S.A., M.B.I.R. and M.R.I.; data curation, M.J.I. and M.A.S.B.; writing—original draft preparation, M.J.I. and F.H.; writing—review and editing, F.H. and M.B.I.R.; visualization, S.A., M.B.I.R. and M.R.I.; supervision, S.A. and M.R.I.; project administration, S.A., M.B.I.R. and M.R.I.; funding acquisition, M.A.S.B., M.B.I.R. and M.J.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by Xiamen University Malaysia, project number XMUMRF/2018-C2/IECE/0002; by the Information and Communication Technology Division, Ministry of Posts, Telecommunications, and Information Technology, Government of Bangladesh under reference number 56.00.0000.028.33.098.18-219; and by Universiti Kebangsaan Malaysia under grant numbers GP-2019-K017701, KK-2020-011, and MI-2020-002.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The EMG dataset is publicly available on the website of Rami N. Khushaba (https://www.rami-khushaba.com/electromyogram-emg-repository.html accessed on 7 May 2021).

Acknowledgments

The authors would like to show sincere gratitude to Rami N. Khushaba and Ali H. Al-Timemy for making the EMG dataset publicly available on the website of Rami N. Khushaba. The website for the database is https://www.rami-khushaba.com/electromyogram-emg-repository.html (accessed on 7 May 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The EMG Pattern Recognition Performance for Different Training and Testing Cases

Table A1. The EMG pattern recognition performances when the classifiers are trained and tested with the same force level.
Parameter | Classifier | Proposed | TSD | TDPSD | Wavelet | TDF | AR-RMS | TD
Training and testing with low force:
Accuracy | LDA | 97.86 ± 1.59 | 97.20 ± 1.65 | 96.97 ± 2.50 | 96.49 ± 2.73 | 95.88 ± 2.71 | 95.81 ± 2.69 | 95.17 ± 3.10
Accuracy | SVM | 97.93 ± 1.70 | 97.36 ± 1.67 | 97.06 ± 2.57 | 96.41 ± 2.91 | 95.88 ± 2.83 | 95.80 ± 2.88 | 95.11 ± 3.19
Accuracy | KNN | 97.78 ± 1.78 | 97.26 ± 1.86 | 96.87 ± 2.81 | 96.13 ± 3.13 | 95.55 ± 3.00 | 95.27 ± 3.28 | 94.69 ± 3.52
Sensitivity | LDA | 93.57 ± 4.77 | 91.60 ± 4.96 | 90.91 ± 7.50 | 89.46 ± 8.20 | 87.65 ± 8.14 | 87.43 ± 8.06 | 85.52 ± 9.31
Sensitivity | SVM | 93.80 ± 5.09 | 92.09 ± 5.01 | 91.19 ± 7.72 | 89.22 ± 8.74 | 87.64 ± 8.48 | 87.39 ± 8.65 | 85.34 ± 9.56
Sensitivity | KNN | 93.34 ± 5.35 | 91.79 ± 5.59 | 90.60 ± 8.44 | 88.40 ± 9.39 | 86.64 ± 8.99 | 85.80 ± 9.84 | 84.06 ± 10.55
Specificity | LDA | 98.71 ± 0.92 | 98.32 ± 0.96 | 98.21 ± 1.45 | 97.90 ± 1.60 | 97.56 ± 1.50 | 97.47 ± 1.59 | 97.14 ± 1.74
Specificity | SVM | 98.75 ± 1.00 | 98.42 ± 0.98 | 98.27 ± 1.52 | 97.86 ± 1.72 | 97.56 ± 1.62 | 97.46 ± 1.74 | 97.12 ± 1.79
Specificity | KNN | 98.65 ± 1.08 | 98.33 ± 1.12 | 98.12 ± 1.71 | 97.66 ± 1.89 | 97.32 ± 1.78 | 97.11 ± 2.04 | 96.82 ± 2.06
Precision | LDA | 94.48 ± 4.18 | 92.47 ± 4.42 | 91.86 ± 6.83 | 90.52 ± 7.58 | 88.95 ± 7.33 | 88.81 ± 7.62 | 87.07 ± 8.48
Precision | SVM | 94.46 ± 4.41 | 93.03 ± 4.62 | 92.09 ± 7.04 | 90.12 ± 8.14 | 88.83 ± 7.85 | 88.56 ± 8.08 | 86.91 ± 8.73
Precision | KNN | 93.95 ± 4.78 | 92.68 ± 5.17 | 91.43 ± 7.97 | 89.29 ± 8.89 | 87.69 ± 8.60 | 86.91 ± 9.54 | 85.47 ± 10.29
F1 score | LDA | 93.30 ± 4.91 | 91.28 ± 5.06 | 90.69 ± 7.65 | 89.29 ± 8.32 | 87.39 ± 8.15 | 87.14 ± 8.07 | 85.01 ± 9.51
F1 score | SVM | 93.64 ± 5.18 | 91.89 ± 5.05 | 91.05 ± 7.87 | 89.07 ± 8.85 | 87.42 ± 8.52 | 87.25 ± 8.64 | 85.01 ± 9.76
F1 score | KNN | 93.22 ± 5.46 | 91.60 ± 5.69 | 90.44 ± 8.72 | 88.24 ± 9.55 | 86.39 ± 9.10 | 85.55 ± 10.0 | 83.67 ± 10.86
Training and testing with medium force:
Accuracy | LDA | 97.89 ± 1.05 | 97.30 ± 1.03 | 96.75 ± 1.68 | 96.21 ± 1.80 | 95.91 ± 1.79 | 96.00 ± 1.92 | 95.13 ± 2.21
Accuracy | SVM | 97.91 ± 1.09 | 97.33 ± 1.06 | 96.96 ± 1.68 | 96.12 ± 1.85 | 95.95 ± 1.8 | 95.85 ± 1.98 | 95.10 ± 2.25
Accuracy | KNN | 97.65 ± 1.20 | 97.17 ± 1.15 | 96.56 ± 1.87 | 95.75 ± 2.09 | 95.53 ± 2.17 | 95.53 ± 2.29 | 94.55 ± 2.52
Sensitivity | LDA | 93.66 ± 3.16 | 91.90 ± 3.09 | 90.25 ± 5.03 | 88.62 ± 5.40 | 87.72 ± 5.38 | 88.00 ± 5.77 | 85.40 ± 6.62
Sensitivity | SVM | 93.72 ± 3.27 | 91.99 ± 3.19 | 90.87 ± 5.05 | 88.36 ± 5.55 | 87.84 ± 5.40 | 87.55 ± 5.94 | 85.31 ± 6.76
Sensitivity | KNN | 92.96 ± 3.61 | 91.52 ± 3.45 | 89.68 ± 5.61 | 87.24 ± 6.28 | 86.59 ± 6.50 | 86.58 ± 6.87 | 83.66 ± 7.57
Specificity | LDA | 98.82 ± 0.64 | 98.50 ± 0.61 | 98.17 ± 0.99 | 97.82 ± 1.09 | 97.62 ± 1.08 | 97.71 ± 1.13 | 97.18 ± 1.32
Specificity | SVM | 98.81 ± 0.65 | 98.49 ± 0.63 | 98.29 ± 0.99 | 97.76 ± 1.12 | 97.61 ± 1.10 | 97.61 ± 1.16 | 97.14 ± 1.36
Specificity | KNN | 98.66 ± 0.72 | 98.41 ± 0.69 | 98.05 ± 1.09 | 97.53 ± 1.26 | 97.36 ± 1.32 | 97.42 ± 1.36 | 96.79 ± 1.53
Precision | LDA | 94.25 ± 3.16 | 92.87 ± 3.08 | 91.24 ± 4.97 | 89.64 ± 5.43 | 88.88 ± 5.28 | 88.83 ± 5.76 | 86.77 ± 6.29
Precision | SVM | 94.17 ± 3.21 | 92.75 ± 3.16 | 91.62 ± 4.96 | 89.33 ± 5.55 | 88.86 ± 5.35 | 88.49 ± 5.88 | 86.68 ± 6.57
Precision | KNN | 93.45 ± 3.52 | 92.26 ± 3.45 | 90.51 ± 5.52 | 88.08 ± 6.26 | 87.57 ± 6.39 | 87.38 ± 6.90 | 84.97 ± 7.40
F1 score | LDA | 93.55 ± 3.21 | 91.70 ± 3.22 | 90.11 ± 5.07 | 88.51 ± 5.45 | 87.49 ± 5.42 | 87.87 ± 5.83 | 85.12 ± 6.69
F1 score | SVM | 93.62 ± 3.31 | 91.85 ± 3.33 | 90.81 ± 5.08 | 88.23 ± 5.62 | 87.65 ± 5.46 | 87.44 ± 5.96 | 85.12 ± 6.82
F1 score | KNN | 92.84 ± 3.67 | 91.39 ± 3.59 | 89.58 ± 5.67 | 87.07 ± 6.37 | 86.35 ± 6.57 | 86.44 ± 6.99 | 83.39 ± 7.65
Training and testing with high force:
Accuracy | LDA | 97.44 ± 1.10 | 96.69 ± 1.60 | 96.34 ± 1.75 | 95.56 ± 1.70 | 95.40 ± 1.98 | 95.69 ± 1.50 | 94.89 ± 1.88
Accuracy | SVM | 97.32 ± 1.22 | 96.63 ± 1.67 | 96.32 ± 1.70 | 95.47 ± 1.77 | 95.36 ± 1.94 | 95.52 ± 1.50 | 94.89 ± 1.99
Accuracy | KNN | 97.13 ± 1.25 | 96.44 ± 1.86 | 95.95 ± 1.90 | 95.10 ± 1.85 | 94.99 ± 2.16 | 95.07 ± 1.75 | 94.39 ± 2.17
Sensitivity | LDA | 92.33 ± 3.30 | 90.07 ± 4.80 | 89.01 ± 5.24 | 86.69 ± 5.09 | 86.20 ± 5.94 | 87.07 ± 4.50 | 84.68 ± 5.63
Sensitivity | SVM | 91.97 ± 3.66 | 89.90 ± 5.01 | 88.96 ± 5.09 | 86.41 ± 5.31 | 86.08 ± 5.81 | 86.57 ± 4.50 | 84.68 ± 5.97
Sensitivity | KNN | 91.38 ± 3.75 | 89.31 ± 5.57 | 87.84 ± 5.69 | 85.30 ± 5.55 | 84.97 ± 6.49 | 85.21 ± 5.24 | 83.18 ± 6.51
Specificity | LDA | 98.54 ± 0.65 | 98.11 ± 0.91 | 97.90 ± 1.01 | 97.43 ± 0.99 | 97.33 ± 1.17 | 97.52 ± 0.92 | 97.06 ± 1.14
Specificity | SVM | 98.48 ± 0.71 | 98.07 ± 0.97 | 97.90 ± 0.99 | 97.38 ± 1.03 | 97.31 ± 1.15 | 97.41 ± 0.92 | 97.07 ± 1.20
Specificity | KNN | 98.36 ± 0.72 | 97.94 ± 1.09 | 97.67 ± 1.12 | 97.14 ± 1.10 | 97.05 ± 1.32 | 97.13 ± 1.09 | 96.73 ± 1.33
Precision | LDA | 92.97 ± 3.11 | 90.97 ± 4.29 | 90.11 ± 4.21 | 87.74 ± 4.73 | 87.35 ± 5.15 | 87.98 ± 4.38 | 86.06 ± 4.90
Precision | SVM | 92.79 ± 3.52 | 90.79 ± 4.61 | 90.03 ± 4.13 | 87.51 ± 4.93 | 87.28 ± 5.06 | 87.46 ± 4.26 | 86.01 ± 5.36
Precision | KNN | 92.24 ± 3.52 | 90.24 ± 5.11 | 88.93 ± 4.80 | 86.31 ± 5.25 | 86.09 ± 6.01 | 86.25 ± 5.06 | 84.48 ± 5.95
F1 score | LDA | 92.07 ± 3.40 | 89.77 ± 4.87 | 88.67 ± 5.12 | 86.37 ± 5.03 | 85.78 ± 5.87 | 86.75 ± 4.45 | 84.23 ± 5.53
F1 score | SVM | 91.67 ± 3.78 | 89.58 ± 5.08 | 88.61 ± 4.94 | 86.13 ± 5.28 | 85.70 ± 5.80 | 86.23 ± 4.41 | 84.26 ± 5.94
F1 score | KNN | 91.09 ± 3.84 | 88.96 ± 5.72 | 87.47 ± 5.61 | 84.94 ± 5.54 | 84.61 ± 6.46 | 84.89 ± 5.16 | 82.74 ± 6.53
Table A2. The EMG pattern recognition performances when the classifiers are trained with a single force level and tested with all force levels.
Parameter | Classifier | Proposed | TSD | TDPSD | Wavelet | TDF | AR-RMS | TD
Training with low force:
Accuracy | LDA | 89.07 ± 3.15 | 88.04 ± 2.61 | 88.48 ± 2.78 | 87.63 ± 2.91 | 86.52 ± 3.04 | 87.23 ± 2.71 | 86.18 ± 2.92
Accuracy | SVM | 89.10 ± 2.79 | 88.01 ± 2.57 | 88.57 ± 2.73 | 87.70 ± 2.83 | 86.46 ± 3.00 | 87.28 ± 2.94 | 86.15 ± 2.87
Accuracy | KNN | 89.02 ± 2.82 | 88.06 ± 2.64 | 88.55 ± 2.64 | 87.57 ± 2.86 | 86.84 ± 2.82 | 87.27 ± 2.84 | 86.50 ± 2.70
Sensitivity | LDA | 67.22 ± 9.44 | 64.13 ± 7.84 | 65.43 ± 8.33 | 62.90 ± 8.74 | 59.57 ± 9.13 | 61.69 ± 8.12 | 58.55 ± 8.76
Sensitivity | SVM | 67.29 ± 8.38 | 64.03 ± 7.71 | 65.72 ± 8.19 | 63.11 ± 8.49 | 59.37 ± 8.99 | 61.83 ± 8.82 | 58.46 ± 8.60
Sensitivity | KNN | 67.07 ± 8.45 | 64.17 ± 7.91 | 65.64 ± 7.92 | 62.72 ± 8.58 | 60.53 ± 8.45 | 61.81 ± 8.51 | 59.50 ± 8.11
Specificity | LDA | 93.72 ± 1.75 | 93.11 ± 1.49 | 93.34 ± 1.59 | 92.87 ± 1.68 | 92.17 ± 1.77 | 92.59 ± 1.58 | 92.00 ± 1.69
Specificity | SVM | 93.70 ± 1.64 | 93.11 ± 1.47 | 93.4 ± 1.53 | 92.91 ± 1.71 | 92.07 ± 1.85 | 92.62 ± 1.75 | 91.87 ± 1.75
Specificity | KNN | 93.65 ± 1.65 | 93.12 ± 1.54 | 93.36 ± 1.5 | 92.82 ± 1.69 | 92.30 ± 1.70 | 92.59 ± 1.76 | 92.11 ± 1.69
Precision | LDA | 75.51 ± 5.90 | 72.35 ± 4.97 | 73.37 ± 7.31 | 70.74 ± 6.72 | 67.70 ± 7.93 | 69.21 ± 6.65 | 66.75 ± 7.87
Precision | SVM | 74.63 ± 5.92 | 72.21 ± 5.51 | 73.21 ± 6.62 | 70.49 ± 6.57 | 67.55 ± 7.98 | 69.06 ± 7.45 | 66.55 ± 8.55
Precision | KNN | 74.21 ± 5.94 | 72.06 ± 5.85 | 72.81 ± 6.96 | 69.84 ± 7.07 | 66.83 ± 7.42 | 67.49 ± 7.71 | 65.42 ± 8.22
F1 score | LDA | 67.04 ± 9.07 | 64.00 ± 7.31 | 65.10 ± 7.95 | 62.58 ± 8.48 | 59.23 ± 8.76 | 61.11 ± 7.90 | 58.05 ± 8.48
F1 score | SVM | 67.22 ± 8.12 | 64.02 ± 7.17 | 65.40 ± 8.05 | 62.85 ± 8.21 | 59.40 ± 8.49 | 61.52 ± 8.54 | 58.30 ± 8.30
F1 score | KNN | 67.07 ± 8.19 | 64.22 ± 7.41 | 65.35 ± 7.86 | 62.47 ± 8.39 | 60.16 ± 8.14 | 61.31 ± 8.31 | 58.96 ± 8.11
Training with medium force:
Accuracy | LDA | 91.99 ± 2.35 | 90.86 ± 2.05 | 90.66 ± 2.63 | 90.57 ± 2.41 | 89.20 ± 2.97 | 90.03 ± 2.40 | 88.96 ± 3.03
Accuracy | SVM | 91.94 ± 2.44 | 90.82 ± 2.07 | 90.78 ± 2.56 | 90.45 ± 2.39 | 89.26 ± 2.82 | 89.91 ± 2.49 | 88.78 ± 2.90
Accuracy | KNN | 91.89 ± 2.42 | 90.86 ± 1.92 | 90.76 ± 2.67 | 90.27 ± 2.56 | 89.22 ± 3.06 | 89.81 ± 2.65 | 88.76 ± 2.92
Sensitivity | LDA | 75.97 ± 7.06 | 72.58 ± 6.14 | 71.97 ± 7.88 | 71.70 ± 7.22 | 67.61 ± 8.91 | 70.10 ± 7.20 | 66.89 ± 9.09
Sensitivity | SVM | 75.81 ± 7.33 | 72.46 ± 6.21 | 72.35 ± 7.68 | 71.34 ± 7.17 | 67.79 ± 8.47 | 69.74 ± 7.46 | 66.34 ± 8.70
Sensitivity | KNN | 75.67 ± 7.26 | 72.57 ± 5.77 | 72.27 ± 8.00 | 70.80 ± 7.68 | 67.67 ± 9.19 | 69.44 ± 7.95 | 66.29 ± 8.76
Specificity | LDA | 95.29 ± 1.44 | 94.61 ± 1.30 | 94.49 ± 1.59 | 94.41 ± 1.49 | 93.55 ± 1.81 | 94.11 ± 1.44 | 93.38 ± 1.85
Specificity | SVM | 95.24 ± 1.50 | 94.57 ± 1.31 | 94.54 ± 1.57 | 94.34 ± 1.49 | 93.56 ± 1.74 | 94.02 ± 1.50 | 93.26 ± 1.78
Specificity | KNN | 95.21 ± 1.50 | 94.58 ± 1.23 | 94.52 ± 1.65 | 94.22 ± 1.61 | 93.52 ± 1.91 | 93.95 ± 1.62 | 93.23 ± 1.83
Precision | LDA | 78.70 ± 5.93 | 75.77 ± 4.91 | 75.12 ± 6.73 | 74.03 ± 6.54 | 70.41 ± 8.26 | 72.37 ± 7.12 | 69.18 ± 8.73
Precision | SVM | 78.57 ± 6.02 | 75.71 ± 4.76 | 75.05 ± 6.94 | 73.45 ± 6.49 | 70.63 ± 7.90 | 72.03 ± 7.17 | 69.03 ± 8.24
Precision | KNN | 78.27 ± 6.10 | 75.55 ± 4.51 | 74.69 ± 7.39 | 72.77 ± 7.06 | 70.11 ± 8.51 | 71.31 ± 8.17 | 68.33 ± 8.51
F1 score | LDA | 75.83 ± 6.89 | 72.47 ± 5.94 | 71.75 ± 7.53 | 71.44 ± 6.92 | 67.42 ± 8.68 | 69.94 ± 7.08 | 66.64 ± 8.90
F1 score | SVM | 75.76 ± 7.08 | 72.46 ± 5.85 | 72.13 ± 7.55 | 71.09 ± 6.91 | 67.61 ± 8.18 | 69.61 ± 7.34 | 66.09 ± 8.46
F1 score | KNN | 75.61 ± 7.05 | 72.54 ± 5.48 | 72.08 ± 7.78 | 70.53 ± 7.47 | 67.52 ± 8.95 | 69.20 ± 7.98 | 66.01 ± 8.70
Training with high force:
Accuracy | LDA | 89.93 ± 2.26 | 88.76 ± 2.21 | 88.31 ± 2.70 | 88.11 ± 2.29 | 87.20 ± 2.47 | 87.92 ± 2.65 | 86.24 ± 2.90
Accuracy | SVM | 89.72 ± 2.11 | 88.65 ± 2.03 | 88.21 ± 2.61 | 88.04 ± 2.10 | 87.32 ± 2.47 | 87.76 ± 2.44 | 86.40 ± 2.73
Accuracy | KNN | 89.53 ± 2.38 | 88.47 ± 2.16 | 87.95 ± 2.76 | 87.55 ± 2.18 | 86.76 ± 2.55 | 87.19 ± 2.46 | 85.80 ± 2.63
Sensitivity | LDA | 69.79 ± 6.79 | 66.28 ± 6.64 | 64.93 ± 8.09 | 64.32 ± 6.88 | 61.60 ± 7.42 | 63.76 ± 7.94 | 58.73 ± 8.70
Sensitivity | SVM | 69.17 ± 6.33 | 65.94 ± 6.09 | 64.64 ± 7.84 | 64.12 ± 6.29 | 61.95 ± 7.42 | 63.28 ± 7.32 | 59.21 ± 8.19
Sensitivity | KNN | 68.60 ± 7.13 | 65.42 ± 6.47 | 63.84 ± 8.27 | 62.64 ± 6.55 | 60.29 ± 7.66 | 61.57 ± 7.39 | 57.40 ± 7.88
Specificity | LDA | 93.90 ± 1.32 | 93.18 ± 1.29 | 92.85 ± 1.63 | 92.75 ± 1.34 | 92.11 ± 1.57 | 92.61 ± 1.57 | 91.56 ± 1.69
Specificity | SVM | 93.77 ± 1.23 | 93.10 ± 1.20 | 92.76 ± 1.57 | 92.67 ± 1.28 | 92.12 ± 1.58 | 92.50 ± 1.46 | 91.59 ± 1.65
Specificity | KNN | 93.63 ± 1.41 | 92.97 ± 1.31 | 92.54 ± 1.68 | 92.35 ± 1.32 | 91.78 ± 1.66 | 92.12 ± 1.49 | 91.22 ± 1.60
Precision | LDA | 74.84 ± 5.30 | 72.18 ± 4.89 | 70.51 ± 7.23 | 69.12 ± 6.66 | 66.68 ± 7.46 | 68.12 ± 7.14 | 63.70 ± 8.82
Precision | SVM | 74.36 ± 4.89 | 71.89 ± 4.11 | 70.85 ± 6.47 | 68.84 ± 6.32 | 66.73 ± 7.20 | 68.17 ± 6.58 | 65.00 ± 8.29
Precision | KNN | 73.82 ± 5.34 | 71.18 ± 5.10 | 70.60 ± 7.21 | 67.79 ± 6.46 | 66.36 ± 8.11 | 66.38 ± 7.18 | 63.85 ± 8.82
F1 score | LDA | 69.10 ± 6.63 | 65.67 ± 6.42 | 63.63 ± 8.51 | 63.35 ± 7.15 | 60.46 ± 7.70 | 62.99 ± 8.17 | 57.69 ± 8.79
F1 score | SVM | 68.43 ± 5.93 | 65.27 ± 5.57 | 63.43 ± 8.12 | 63.16 ± 6.51 | 60.99 ± 7.65 | 62.58 ± 7.43 | 58.42 ± 8.30
F1 score | KNN | 68.05 ± 6.80 | 64.86 ± 6.21 | 62.72 ± 8.54 | 61.80 ± 6.84 | 59.23 ± 7.91 | 60.77 ± 7.65 | 56.60 ± 8.07
Table A3. The EMG pattern recognition performances when the classifiers are trained with two force levels and tested with all force levels.
Parameter | Classifier | Proposed | TSD | TDPSD | Wavelet | TDF | AR-RMS | TD
Training with low and medium forces:
Accuracy | LDA | 94.21 ± 1.83 | 93.30 ± 1.66 | 93.06 ± 2.34 | 92.80 ± 2.23 | 91.53 ± 2.69 | 92.06 ± 1.86 | 91.05 ± 2.52
Accuracy | SVM | 94.20 ± 1.84 | 93.22 ± 1.63 | 93.12 ± 2.32 | 92.75 ± 2.26 | 91.76 ± 2.59 | 92.02 ± 2.16 | 91.12 ± 2.82
Accuracy | KNN | 93.90 ± 1.97 | 93.03 ± 1.74 | 92.85 ± 2.47 | 92.37 ± 2.46 | 91.49 ± 2.81 | 91.72 ± 2.28 | 90.86 ± 2.90
Sensitivity | LDA | 82.64 ± 5.50 | 79.90 ± 4.97 | 79.18 ± 7.03 | 78.40 ± 6.70 | 74.60 ± 8.07 | 76.18 ± 5.59 | 73.14 ± 7.57
Sensitivity | SVM | 82.60 ± 5.52 | 79.66 ± 4.89 | 79.37 ± 6.95 | 78.24 ± 6.79 | 75.28 ± 7.76 | 76.06 ± 6.48 | 73.36 ± 8.45
Sensitivity | KNN | 81.70 ± 5.90 | 79.09 ± 5.22 | 78.54 ± 7.42 | 77.11 ± 7.38 | 74.47 ± 8.44 | 75.15 ± 6.85 | 72.57 ± 8.69
Specificity | LDA | 96.64 ± 1.05 | 96.07 ± 0.95 | 95.95 ± 1.37 | 95.79 ± 1.31 | 95.00 ± 1.60 | 95.32 ± 1.11 | 94.68 ± 1.50
Specificity | SVM | 96.63 ± 1.06 | 96.03 ± 0.92 | 95.97 ± 1.35 | 95.75 ± 1.34 | 95.12 ± 1.52 | 95.30 ± 1.26 | 94.72 ± 1.67
Specificity | KNN | 96.44 ± 1.14 | 95.90 ± 1.01 | 95.80 ± 1.46 | 95.51 ± 1.48 | 94.93 ± 1.71 | 95.09 ± 1.39 | 94.53 ± 1.76
Precision | LDA | 84.45 ± 4.98 | 81.95 ± 4.49 | 81.28 ± 6.79 | 80.19 ± 6.37 | 76.89 ± 7.89 | 77.98 ± 5.72 | 75.26 ± 7.81
Precision | SVM | 84.37 ± 4.96 | 81.8 ± 4.27 | 81.28 ± 6.70 | 80.03 ± 6.43 | 77.24 ± 7.44 | 77.90 ± 6.18 | 75.55 ± 8.14
Precision | KNN | 83.29 ± 5.59 | 80.98 ± 4.93 | 80.24 ± 7.34 | 78.53 ± 7.41 | 75.91 ± 8.58 | 76.34 ± 7.18 | 73.94 ± 9.00
F1 score | LDA | 82.63 ± 5.40 | 79.91 ± 4.92 | 79.13 ± 6.96 | 78.30 ± 6.59 | 74.64 ± 7.85 | 76.11 ± 5.58 | 73.04 ± 7.57
F1 score | SVM | 82.60 ± 5.39 | 79.72 ± 4.75 | 79.36 ± 6.85 | 78.21 ± 6.65 | 75.26 ± 7.62 | 76.05 ± 6.36 | 73.34 ± 8.35
F1 score | KNN | 81.69 ± 5.83 | 79.10 ± 5.16 | 78.49 ± 7.37 | 77.02 ± 7.35 | 74.38 ± 8.38 | 75.02 ± 6.90 | 72.32 ± 8.85
Training with low and high forces:
Accuracy | LDA | 95.34 ± 1.70 | 94.80 ± 1.61 | 94.17 ± 2.12 | 93.39 ± 2.12 | 92.55 ± 2.64 | 92.81 ± 2.24 | 91.57 ± 2.86
Accuracy | SVM | 95.37 ± 1.78 | 94.80 ± 1.70 | 94.25 ± 2.16 | 93.30 ± 2.17 | 92.49 ± 2.74 | 92.77 ± 2.35 | 91.60 ± 2.89
Accuracy | KNN | 94.98 ± 1.92 | 94.41 ± 1.87 | 93.72 ± 2.40 | 92.58 ± 2.48 | 91.77 ± 2.95 | 91.92 ± 2.60 | 90.73 ± 3.16
Sensitivity | LDA | 86.03 ± 5.10 | 84.41 ± 4.82 | 82.52 ± 6.35 | 80.18 ± 6.35 | 77.66 ± 7.93 | 78.43 ± 6.71 | 74.72 ± 8.58
Sensitivity | SVM | 86.12 ± 5.34 | 84.39 ± 5.11 | 82.74 ± 6.49 | 79.89 ± 6.52 | 77.47 ± 8.23 | 78.31 ± 7.06 | 74.81 ± 8.68
Sensitivity | KNN | 84.94 ± 5.75 | 83.23 ± 5.60 | 81.16 ± 7.21 | 77.74 ± 7.43 | 75.32 ± 8.84 | 75.75 ± 7.79 | 72.20 ± 9.47
Specificity | LDA | 97.25 ± 1.00 | 96.90 ± 0.95 | 96.55 ± 1.24 | 96.07 ± 1.24 | 95.53 ± 1.61 | 95.67 ± 1.34 | 94.94 ± 1.71
Specificity | SVM | 97.26 ± 1.07 | 96.90 ± 1.02 | 96.58 ± 1.26 | 96.01 ± 1.29 | 95.49 ± 1.68 | 95.65 ± 1.42 | 94.97 ± 1.74
Specificity | KNN | 97.02 ± 1.16 | 96.66 ± 1.12 | 96.25 ± 1.42 | 95.57 ± 1.50 | 95.03 ± 1.83 | 95.12 ± 1.59 | 94.41 ± 1.93
Precision | LDA | 86.81 ± 5.11 | 85.22 ± 4.78 | 83.61 ± 6.29 | 81.16 ± 6.36 | 78.61 ± 8.04 | 79.39 ± 6.60 | 76.00 ± 8.62
Precision | SVM | 86.92 ± 5.37 | 85.26 ± 5.03 | 83.85 ± 6.31 | 80.84 ± 6.55 | 78.54 ± 8.25 | 79.33 ± 6.91 | 76.15 ± 8.63
Precision | KNN | 85.69 ± 5.82 | 84.00 ± 5.73 | 82.11 ± 7.18 | 78.63 ± 7.55 | 76.30 ± 9.08 | 76.61 ± 7.81 | 73.37 ± 9.63
F1 score | LDA | 85.92 ± 5.17 | 84.27 ± 4.90 | 82.33 ± 6.43 | 80.06 ± 6.39 | 77.49 ± 8.02 | 78.31 ± 6.75 | 74.51 ± 8.74
F1 score | SVM | 86.05 ± 5.38 | 84.31 ± 5.15 | 82.62 ± 6.52 | 79.82 ± 6.53 | 77.43 ± 8.23 | 78.27 ± 7.06 | 74.79 ± 8.72
F1 score | KNN | 84.84 ± 5.84 | 83.11 ± 5.72 | 81.02 ± 7.26 | 77.63 ± 7.52 | 75.21 ± 8.95 | 75.64 ± 7.89 | 72.08 ± 9.63
Training with medium and high forces:
Accuracy | LDA | 94.27 ± 1.83 | 93.20 ± 1.54 | 93.00 ± 2.23 | 92.57 ± 2.24 | 91.71 ± 2.87 | 92.06 ± 2.29 | 90.82 ± 2.78
Accuracy | SVM | 94.18 ± 1.86 | 93.24 ± 1.49 | 93.16 ± 2.30 | 92.45 ± 2.29 | 91.95 ± 2.88 | 91.92 ± 2.29 | 91.11 ± 2.72
Accuracy | KNN | 93.88 ± 1.91 | 92.95 ± 1.60 | 92.82 ± 2.34 | 91.97 ± 2.43 | 91.24 ± 3.06 | 91.23 ± 2.45 | 90.29 ± 2.93
Sensitivity | LDA | 82.81 ± 5.50 | 79.60 ± 4.61 | 79.01 ± 6.70 | 77.72 ± 6.71 | 75.14 ± 8.62 | 76.18 ± 6.86 | 72.46 ± 8.33
Sensitivity | SVM | 82.53 ± 5.58 | 79.71 ± 4.48 | 79.49 ± 6.91 | 77.34 ± 6.87 | 75.86 ± 8.64 | 75.75 ± 6.87 | 73.33 ± 8.15
Sensitivity | KNN | 81.64 ± 5.72 | 78.86 ± 4.80 | 78.46 ± 7.01 | 75.91 ± 7.30 | 73.71 ± 9.18 | 73.68 ± 7.35 | 70.87 ± 8.78
Specificity | LDA | 96.62 ± 1.09 | 95.95 ± 0.94 | 95.83 ± 1.33 | 95.55 ± 1.38 | 95.00 ± 1.80 | 95.24 ± 1.37 | 94.44 ± 1.72
Specificity | SVM | 96.55 ± 1.11 | 95.97 ± 0.91 | 95.92 ± 1.39 | 95.48 ± 1.41 | 95.13 ± 1.79 | 95.16 ± 1.38 | 94.61 ± 1.69
Specificity | KNN | 96.37 ± 1.15 | 95.79 ± 0.99 | 95.70 ± 1.42 | 95.18 ± 1.50 | 94.68 ± 1.93 | 94.71 ± 1.49 | 94.09 ± 1.83
Precision | LDA | 84.54 ± 4.81 | 81.72 ± 3.94 | 80.95 ± 6.42 | 79.06 ± 6.72 | 76.90 ± 8.29 | 77.50 ± 6.83 | 74.29 ± 8.25
Precision | SVM | 84.38 ± 4.70 | 81.79 ± 3.66 | 81.37 ± 6.51 | 78.77 ± 6.64 | 77.42 ± 8.33 | 77.17 ± 6.82 | 74.92 ± 8.24
Precision | KNN | 83.43 ± 4.94 | 80.95 ± 4.05 | 80.12 ± 6.81 | 77.19 ± 7.17 | 75.29 ± 9.02 | 74.93 ± 7.62 | 72.51 ± 8.95
F1 score | LDA | 82.65 ± 5.47 | 79.42 ± 4.58 | 78.65 ± 6.85 | 77.41 ± 6.85 | 74.71 ± 8.75 | 75.86 ± 7.18 | 72.07 ± 8.49
F1 score | SVM | 82.40 ± 5.50 | 79.57 ± 4.41 | 79.18 ± 6.99 | 77.10 ± 6.92 | 75.49 ± 8.72 | 75.48 ± 7.09 | 72.96 ± 8.36
F1 score | KNN | 81.54 ± 5.61 | 78.71 ± 4.77 | 78.10 ± 7.13 | 75.64 ± 7.37 | 73.36 ± 9.32 | 73.34 ± 7.66 | 70.47 ± 9.07
Table A4. The EMG pattern recognition performances when the classifiers are trained and tested with all force levels.
Parameter | Classifier | Proposed | TSD | TDPSD | Wavelet | TDF | AR-RMS | TD
Accuracy | LDA | 96.30 ± 1.52 | 95.75 ± 1.38 | 95.28 ± 1.94 | 94.45 ± 2.03 | 93.69 ± 2.60 | 93.89 ± 2.02 | 92.71 ± 2.74
Accuracy | SVM | 96.37 ± 1.60 | 95.80 ± 1.42 | 95.34 ± 2.03 | 94.37 ± 2.14 | 93.78 ± 2.62 | 93.98 ± 2.09 | 92.91 ± 2.77
Accuracy | KNN | 95.88 ± 1.77 | 95.33 ± 1.68 | 94.79 ± 2.28 | 93.62 ± 2.51 | 92.95 ± 2.99 | 93.07 ± 2.48 | 91.88 ± 3.10
Sensitivity | LDA | 88.89 ± 4.55 | 87.24 ± 4.13 | 85.83 ± 5.81 | 83.34 ± 6.10 | 81.07 ± 7.81 | 81.68 ± 6.06 | 78.12 ± 8.22
Sensitivity | SVM | 89.11 ± 4.80 | 87.41 ± 4.25 | 86.02 ± 6.09 | 83.10 ± 6.43 | 81.35 ± 7.87 | 81.93 ± 6.28 | 78.72 ± 8.30
Sensitivity | KNN | 87.63 ± 5.32 | 86.00 ± 5.04 | 84.36 ± 6.85 | 80.86 ± 7.52 | 78.84 ± 8.96 | 79.21 ± 7.43 | 75.65 ± 9.31
Specificity | LDA | 97.82 ± 0.90 | 97.51 ± 0.81 | 97.22 ± 1.14 | 96.71 ± 1.23 | 96.24 ± 1.60 | 96.36 ± 1.22 | 95.63 ± 1.67
Specificity | SVM | 97.86 ± 0.95 | 97.53 ± 0.83 | 97.26 ± 1.20 | 96.66 ± 1.29 | 96.29 ± 1.59 | 96.41 ± 1.27 | 95.77 ± 1.67
Specificity | KNN | 97.56 ± 1.07 | 97.24 ± 1.01 | 96.91 ± 1.35 | 96.19 ± 1.54 | 95.76 ± 1.85 | 95.84 ± 1.52 | 95.11 ± 1.90
Precision | LDA | 89.31 ± 4.50 | 87.85 ± 4.04 | 86.50 ± 5.79 | 83.86 ± 6.16 | 81.59 ± 8.02 | 82.20 ± 6.19 | 78.79 ± 8.44
Precision | SVM | 89.54 ± 4.71 | 88.01 ± 4.13 | 86.71 ± 5.96 | 83.62 ± 6.46 | 81.90 ± 7.86 | 82.43 ± 6.31 | 79.36 ± 8.34
Precision | KNN | 88.01 ± 5.31 | 86.51 ± 5.04 | 84.92 ± 6.86 | 81.21 ± 7.73 | 79.32 ± 9.12 | 79.56 ± 7.64 | 76.19 ± 9.59
F1 score | LDA | 88.81 ± 4.58 | 87.16 ± 4.17 | 85.69 ± 5.85 | 83.20 ± 6.16 | 80.86 ± 7.91 | 81.54 ± 6.15 | 77.90 ± 8.31
F1 score | SVM | 89.06 ± 4.81 | 87.36 ± 4.27 | 85.93 ± 6.09 | 83.00 ± 6.44 | 81.23 ± 7.89 | 81.85 ± 6.33 | 78.59 ± 8.35
F1 score | KNN | 87.56 ± 5.37 | 85.93 ± 5.09 | 84.24 ± 6.89 | 80.69 ± 7.64 | 78.67 ± 9.09 | 79.04 ± 7.58 | 75.44 ± 9.50

References

1. Chowdhury, R.H.; Reaz, M.B.I.; Bin Mohd Ali, M.A.; Bakar, A.A.A.; Chellappan, K.; Chang, T.G. Surface electromyography signal processing and classification techniques. Sensors 2013, 13, 12431–12466.
2. Reaz, M.B.I.; Hussain, M.S.; Mohd-Yasin, F. Techniques of EMG signal analysis: Detection, processing, classification and applications. Biol. Proced. Online 2006, 8, 11–35.
3. Ng, C.L.; Reaz, M.B.I.; Chowdhury, M.E.H. A low noise capacitive electromyography monitoring system for remote healthcare applications. IEEE Sens. J. 2020, 20, 3333–3342.
4. Haque, F.; Reaz, M.B.I.; Ali, S.H.M.; Arsad, N.; Chowdhury, M.E.H. Performance analysis of noninvasive electrophysiological methods for the assessment of diabetic sensorimotor polyneuropathy in clinical research: A systematic review and meta-analysis with trial sequential analysis. Sci. Rep. 2020, 10, 21770.
5. Ng, C.L.; Reaz, M.B.I. Evolution of a capacitive electromyography contactless biosensor: Design and modelling techniques. Meas. J. Int. Meas. Confed. 2019, 145, 460–471.
6. Ng, C.L.; Reaz, M.B.I. Impact of skin-electrode capacitance on the performance of CEMG biosensor. IEEE Sens. J. 2017, 17, 2636–2637.
7. Ng, C.L.; Reaz, M.B.I. Characterization of textile-insulated capacitive biosensors. Sensors 2017, 17, 574.
8. Roche, A.D.; Rehbaum, H.; Farina, D.; Aszmann, O.C. Prosthetic myoelectric control strategies: A clinical perspective. Curr. Surg. Rep. 2014, 2, 1–11.
9. Webster, G. The bionic hand with a human touch. CNN 2013. Available online: https://edition.cnn.com/2013/02/01/tech/bionic-hand-ilimb-prosthetic/index.html (accessed on 7 May 2021).
10. Yao, B.; Peng, Y.; Zhang, X.; Zhang, Y.; Zhou, P.; Pu, J. The influence of common component on myoelectric pattern recognition. J. Int. Med. Res. 2020, 48.
11. Powar, O.S.; Chemmangat, K. Reducing the effect of wrist variation on pattern recognition of myoelectric hand prostheses control through dynamic time warping. Biomed. Signal Process. Control 2020, 55, 101626.
12. Fougner, A.; Scheme, E.; Chan, A.D.C.; Englehart, K.; Stavdahl, Ø. Resolving the limb position effect in myoelectric pattern recognition. IEEE Trans. Neural Syst. Rehabil. Eng. 2011, 19, 644–651.
13. Khushaba, R.N.; Takruri, M.; Miro, J.V.; Kodagoda, S. Towards limb position invariant myoelectric pattern recognition using time-dependent spectral features. Neural Netw. 2014, 55, 42–58.
14. Young, A.J.; Hargrove, L.J.; Kuiken, T.A. The effects of electrode size and orientation on the sensitivity of myoelectric pattern recognition systems to electrode shift. IEEE Trans. Biomed. Eng. 2011, 58, 2537–2544.
15. Young, A.J.; Hargrove, L.J.; Kuiken, T.A. Improving myoelectric pattern recognition robustness to electrode shift by changing interelectrode distance and electrode configuration. IEEE Trans. Biomed. Eng. 2012, 59, 645–652.
16. He, J.; Sheng, X.; Zhu, X.; Jiang, N. Position identification for robust myoelectric control against electrode shift. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 3121–3128.
17. Lorrain, T.; Jiang, N.; Farina, D. Influence of the training set on the accuracy of surface EMG classification in dynamic contractions for the control of multifunction prostheses. J. Neuroeng. Rehabil. 2011, 8, 25.
18. Asogbon, M.G.; Samuel, O.W.; Geng, Y.; Oluwagbemi, O.; Ning, J.; Chen, S.; Ganesh, N.; Feng, P.; Li, G. Towards resolving the co-existing impacts of multiple dynamic factors on the performance of EMG-pattern recognition based prostheses. Comput. Methods Programs Biomed. 2020, 184, 105278.
19. Al-Timemy, A.H.; Khushaba, R.N.; Bugmann, G.; Escudero, J. Improving the performance against force variation of EMG controlled multifunctional upper-limb prostheses for transradial amputees. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 650–661.
20. Scheme, E.; Englehart, K. Electromyogram pattern recognition for control of powered upper-limb prostheses: State of the art and challenges for clinical use. J. Rehabil. Res. Dev. 2011, 48, 643–660.
  21. Onay, F.; Mert, A. Phasor represented EMG feature extraction against varying contraction level of prosthetic control. Biomed. Signal Process. Control 2020, 59, 101881. [Google Scholar] [CrossRef]
  22. Calvert, T.W.; Chapman, A.E. The relationship between the surface EMG and force transients in muscle: Simulation and experimental studies. Proc. IEEE 1977, 65, 682–689. [Google Scholar] [CrossRef]
  23. Hof, A.L. The relationship between electromyogram and muscle force. Sportverletz. Sportschaden 1997, 11, 79–86. [Google Scholar] [CrossRef]
  24. Tkach, D.; Huang, H.; Kuiken, T.A. Study of stability of time-domain features for electromyographic pattern recognition. J. Neuroeng. Rehabil. 2010, 7, 1–13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Huang, Y.; Englehart, K.B.; Hudgins, B.; Chan, A.D.C. A Gaussian mixture model based classification scheme for myoelectric control of powered upper limb prostheses. IEEE Trans. Biomed. Eng. 2005, 52, 1801–1811. [Google Scholar] [CrossRef]
  26. Khushaba, R.N.; Al-Timemy, A.H.; Al-Ani, A.; Al-Jumaily, A. A framework of temporal-spatial descriptors-based feature extraction for improved myoelectric pattern recognition. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1821–1831. [Google Scholar] [CrossRef]
  27. He, J.; Zhang, D.; Sheng, X.; Li, S.; Zhu, X. Invariant surface EMG feature against varying contraction level for myoelectric control based on muscle coordination. IEEE J. Biomed. Health Inform. 2015, 19, 874–882. [Google Scholar] [CrossRef]
  28. Simao, M.; Mendes, N.; Gibaru, O.; Neto, P. A review on electromyography decoding and pattern recognition for human-machine interaction. IEEE Access 2019, 7, 39564–39582. [Google Scholar] [CrossRef]
  29. Li, K.; Zhang, J.; Wang, L.; Zhang, M.; Li, J.; Bao, S. A review of the key technologies for sEMG-based human-robot interaction systems. Biomed. Signal Process. Control 2020, 62, 102074. [Google Scholar] [CrossRef]
  30. Remeseiro, B.; Bolon-Canedo, V. A review of feature selection methods in medical applications. Comput. Biol. Med. 2019, 112. [Google Scholar] [CrossRef]
  31. Samuel, O.W.; Zhou, H.; Li, X.; Wang, H.; Zhang, H.; Sangaiah, A.K.; Li, G. Pattern recognition of electromyography signals based on novel time domain features for amputees’ limb motion classification. Comput. Electr. Eng. 2018, 67, 646–655. [Google Scholar] [CrossRef]
  32. Khushaba, R.N.; Kodagoda, S.; Lal, S.; Dissanayake, G. Driver drowsiness classification using fuzzy wavelet-packet-based feature-extraction algorithm. IEEE Trans. Biomed. Eng. 2011, 58, 121–131. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  33. Du, Y.C.; Lin, C.H.; Shyu, L.Y.; Chen, T. Portable hand motion classifier for multi-channel surface electromyography recognition using grey relational analysis. Expert Syst. Appl. 2010, 37, 4283–4291. [Google Scholar] [CrossRef]
  34. Hudgins, B.; Parker, P.; Scott, R.N. A new strategy for multifunction myoelectric control. IEEE Trans. Biomed. Eng. 1993, 40, 82–94. [Google Scholar] [CrossRef]
  35. Zhu, Z.; Martinez-Luna, C.; Li, J.; McDonald, B.E.; Dai, C.; Huang, X.; Farrell, T.R.; Clancy, E.A. EMG-Force and EMG-Target models during force-varying bilateral hand-wrist contraction in able-bodied and limb-absent subjects. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 3040–3050. [Google Scholar] [CrossRef]
  36. Pan, L.; Zhang, D.; Sheng, X.; Zhu, X. Improving myoelectric control for amputees through transcranial direct current stimulation. IEEE Trans. Biomed. Eng. 2015, 62, 1927–1936. [Google Scholar] [CrossRef]
  37. Hjorth, B. EEG analysis based on time domain properties. Electroencephalogr. Clin. Neurophysiol. 1970, 29, 306–310. [Google Scholar] [CrossRef]
  38. Farrell, T.R.; Weir, R.F. The optimal controller delay for myoelectric prostheses. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 111–118. [Google Scholar] [CrossRef]
  39. De Luca, C.J.; Donald Gilmore, L.; Kuznetsov, M.; Roy, S.H. Filtering the surface EMG signal: Movement artifact and baseline noise contamination. J. Biomech. 2010, 43, 1573–1579. [Google Scholar] [CrossRef]
  40. Yacoub, S.; Raoof, K. Power line interference rejection from surface electromyography signal using an adaptive algorithm. IRBM 2008, 29, 231–238. [Google Scholar] [CrossRef]
  41. Cai, D.; He, X.; Han, J. SRDA: An efficient algorithm for large scale discriminant analysis. IEEE Trans. Knowl. Data Eng. 2008, 20, 1–12. [Google Scholar]
  42. Triwiyanto, T.; Pawana, I.P.A.; Purnomo, M.H. An improved performance of deep learning based on convolution neural network to classify the hand motion by evaluating hyper parameter. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1678–1688. [Google Scholar] [CrossRef]
  43. Yamanoi, Y.; Ogiri, Y.; Kato, R. EMG-based posture classification using a convolutional neural network for a myoelectric hand. Biomed. Signal Process. Control 2020, 55, 101574. [Google Scholar] [CrossRef]
  44. Paleari, M.; Di Girolamo, M.; Celadon, N.; Favetto, A.; Ariano, P. On optimal electrode configuration to estimate hand movements from forearm surface electromyography. In Proceedings of the Annual International Conferences IEEE Engineering in Medicine and Biology Society, Osaka, Japan, 3–7 July 2013; pp. 6086–6089. [Google Scholar]
  45. Pan, L.; Zhang, D.; Liu, J.; Sheng, X.; Zhu, X. Continuous estimation of finger joint angles under different static wrist motions from surface EMG signals. Biomed. Signal Process. Control 2014, 14, 265–271. [Google Scholar] [CrossRef]
  46. Matsubara, T.; Morimoto, J. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface. IEEE Trans. Biomed. Eng. 2013, 60, 2205–2213. [Google Scholar] [CrossRef] [PubMed]
  47. Oskoei, M.A.; Hu, H. Support vector machine-based classification scheme for myoelectric control applied to upper limb. IEEE Trans. Biomed. Eng. 2008, 55, 1956–1965. [Google Scholar]
  48. Khushaba, R.N.; Kodagoda, S.; Takruri, M.; Dissanayake, G. Toward improved control of prosthetic fingers using surface electromyogram (EMG) signals. Expert Syst. Appl. 2012, 39, 10731–10738. [Google Scholar] [CrossRef]
  49. Khushaba, R.N.; Al-Ani, A.; Al-Timemy, A.; Al-Jumaily, A. A fusion of time-domain descriptors for improved myoelectric hand control. In Proceedings of the 2016 IEEE Symposium Series on Computational Intelligence, Athens, Greece, 6–9 December 2016. [Google Scholar]
  50. Pinzón-Arenas, J.O.; Jiménez-Moreno, R.; Rubiano, A. Percentage estimation of muscular activity of the forearm by means of EMG signals based on the gesture recognized using CNN. Sens. Bio-Sens. Res. 2020, 29, 100353. [Google Scholar] [CrossRef]
  51. Al-Timemy, A.H.; Bugmann, G.; Escudero, J.; Outram, N. Classification of finger movements for the dexterous hand prosthesis control with surface electromyography. IEEE J. Biomed. Health Inform. 2013, 17, 608–618. [Google Scholar] [CrossRef]
  52. Banerjee, P.; Dehnbostel, F.O.; Preissner, R. Prediction is a balancing act: Importance of sampling methods to balance sensitivity and specificity of predictive models based on imbalanced chemical data sets. Front. Chem. 2018, 6, 1–11. [Google Scholar] [CrossRef]
  53. Samuel, O.W.; Asogbon, M.G.; Geng, Y.; Jiang, N.; Mzurikwao, D.; Zheng, Y.; Wong, K.K.L.; Vollero, L.; Li, G. Decoding movement intent patterns based on spatiotemporal and adaptive filtering method towards active motor training in stroke rehabilitation systems. Neural Comput. Appl. 2021, 33, 4793–4806. [Google Scholar] [CrossRef]
  54. Phinyomark, A.; Limsakul, C.; Phukpattaranont, P. Application of wavelet analysis in EMG feature extraction for pattern classification. Meas. Sci. Rev. 2011, 11, 45–52. [Google Scholar] [CrossRef]
  55. Li, G.; Li, J.; Ju, Z.; Sun, Y.; Kong, J. A novel feature extraction method for machine learning based on surface electromyography from healthy brain. Neural Comput. Appl. 2019, 31, 9013–9022. [Google Scholar] [CrossRef] [Green Version]
  56. Iqbal, N.V.; Subramaniam, K.; Asmi, P.S. Robust feature sets for contraction level invariant control of upper limb myoelectric prosthesis. Biomed. Signal Process. Control 2019, 51, 90–96. [Google Scholar] [CrossRef]
Figure 2. The position of electrodes for EMG data acquisition from an amputee. Source: Electromyogram (EMG) repository (rami-khushaba.com) (accessed on 7 May 2021).
Figure 3. The impact of muscle force variation on a gesture (thumb flexion), where (a) presents the raw EMG signal and (b) presents the normalized feature (f1).
Figure 4. The impact of the nonlinear transformation of seven features on a 2D-feature space: (a) original feature space and (b) nonlinearly transformed feature space.
Figure 5. The impact of window length on clustering performance, where (a–h) stand for window lengths of 50 ms, 100 ms, 150 ms, 200 ms, 250 ms, 300 ms, 350 ms, and 400 ms, respectively.
Figure 6. The EMG pattern recognition performances when the training and testing forces are the same, where Tr and Ts indicate training and testing, respectively.
Figure 7. The EMG pattern recognition performances when training the classifiers with a single force level and testing with three force levels, where Tr and Ts indicate training and testing, respectively.
Figure 8. The EMG pattern recognition performances when the classifiers are trained with two force levels and tested with three force levels, where Tr and Ts indicate training and testing, respectively.
Figure 9. The average performances when the classifiers are trained and tested with three forces.
Figure 10. The amputee-wise performance when the SVM is trained and tested with three forces.
Figure 11. The feature extraction time for different feature extraction methods.
Figure 12. The memory size for different feature extraction methods.
Table 1. Different feature extraction methods.
| Paper | Subject Type | Muscle Force Level | Features | Classifier | Training Force | Accuracy (%) | Comment |
|---|---|---|---|---|---|---|---|
| Tkach et al. [24] | Intact | Low and high | Mean absolute value, zero crossings, slope sign change, waveform length, Wilson amplitude, variance, v-order, log detector, EMG histogram, AR, and cepstrum coefficients | LDA | Low and high | 82 with AR | Time-domain features are not stable with muscle force variation. |
| Huang et al. [25] | Intact | --- | Mean absolute value, zero crossings, slope sign change, waveform length, AR, and RMS | Gaussian mixture model | --- | 96 (AR + RMS) | AR and RMS can be grouped for better EMG pattern recognition performance. |
| Scheme et al. [20] | Intact | 20% to 80% of MVC at 10% intervals | Time-domain features | LDA | 20% to 80% | 84 | Time-domain features are not reliable with muscle force variation. |
| Al-Timemy et al. [19] | Amputee | Low, medium, and high | TDPSD: root squared zero-order, second-order, and fourth-order moments; sparseness; irregularity factor; and waveform length ratio | LDA | All | 90 | TDPSD improves the performance with muscle force variation. |
| Khushaba et al. [26] | Intact and amputee | --- | TSD: root squared zero-order, second-order, and fourth-order moments; sparseness; irregularity factor; coefficient of variation; and Teager–Kaiser energy operator | LDA | --- | 99 (128-channel EMG) | TSD improves the EMG pattern recognition performance. |
| He et al. [27] | Intact | Low, medium, and high | Globally normalized discrete Fourier transform-based features | LDA | Medium | 91 | Force-invariant EMG pattern recognition performance is satisfactory, but the electrode position is specific. |
| Khushaba et al. [32] | Intact (driver drowsiness detection) | --- | Symmlet-8 decomposition-based wavelet features, including energy, variance, standard deviation, waveform length, and entropy | LDA | --- | 97 | Performance is good in another field, so the features may be applicable for force-invariant EMG pattern recognition. |
| Du et al. [33] | Intact | --- | Time-domain features (TDF), including the integral of EMG, waveform length, variance, zero crossings, slope sign change, and Wilson amplitude | Grey relational analysis | --- | 96 | Performance is good, so these features may be utilized for force-invariant EMG pattern recognition. |
| Hudgins et al. [34] | Intact and amputee | --- | Mean absolute value, mean absolute value slope, zero crossings, slope sign change, and waveform length | Neural network | --- | 91.2 for intact subjects; 85.5 for amputees | Performance is not satisfactory for amputees, but the features are fundamental. |
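The Hudgins time-domain set (mean absolute value, zero crossings, slope sign change, waveform length) recurs throughout the table. The sketch below shows how such features are typically computed over one analysis window of a single EMG channel; it is an illustrative implementation, not the code used in any of the cited studies, and the function name and the noise thresholds `zc_thresh` and `ssc_thresh` are assumptions.

```python
import numpy as np

def hudgins_td_features(window, zc_thresh=0.01, ssc_thresh=0.01):
    """Classic Hudgins-style time-domain features for one analysis
    window of a single-channel EMG signal (1-D array)."""
    x = np.asarray(window, dtype=float)
    dx = np.diff(x)  # first difference of consecutive samples

    mav = np.mean(np.abs(x))   # mean absolute value
    wl = np.sum(np.abs(dx))    # waveform length (cumulative amplitude change)

    # Zero crossings: sign change between consecutive samples whose
    # amplitude step exceeds a noise threshold.
    zc = np.sum((x[:-1] * x[1:] < 0) & (np.abs(dx) > zc_thresh))

    # Slope sign changes: the slope reverses direction and at least one
    # adjacent slope magnitude exceeds a noise threshold.
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                 (np.maximum(np.abs(dx[:-1]), np.abs(dx[1:])) > ssc_thresh))

    return np.array([mav, wl, zc, ssc])

# Example: a 200 ms window at 2 kHz sampling (400 samples of synthetic EMG).
rng = np.random.default_rng(0)
emg = rng.normal(0.0, 0.1, 400)
features = hudgins_td_features(emg)
```

The thresholds suppress crossings induced by baseline noise; in practice they are tuned to the noise floor of the recording system, and the feature vector is concatenated across channels before classification.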