Review

Real-Time Hand Gesture Recognition Using Surface Electromyography and Machine Learning: A Systematic Literature Review

by Andrés Jaramillo-Yánez 1,2,*, Marco E. Benalcázar 1 and Elisa Mena-Maldonado 1

1 Artificial Intelligence and Computer Vision Research Lab, Department of Informatics and Computer Science, Escuela Politécnica Nacional, Quito 170517, Ecuador
2 School of Science, Royal Melbourne Institute of Technology (RMIT), Melbourne 3000, Australia
* Author to whom correspondence should be addressed.
Sensors 2020, 20(9), 2467; https://doi.org/10.3390/s20092467
Submission received: 30 November 2019 / Revised: 24 February 2020 / Accepted: 25 February 2020 / Published: 27 April 2020
(This article belongs to the Special Issue EMG Sensors and Applications)

Abstract

Today, daily life involves many computing systems; therefore, interacting with them in a natural way makes the communication process more comfortable. Human–Computer Interaction (HCI) has been developed to overcome the communication barriers between humans and computers. One form of HCI is Hand Gesture Recognition (HGR), which predicts the class and the instant of execution of a given movement of the hand. One possible input for these models is surface electromyography (EMG), which records the electrical activity of skeletal muscles. EMG signals contain information about the intention of movement generated by the human brain. This systematic literature review analyses the state-of-the-art of real-time hand gesture recognition models using EMG data and machine learning. We selected and assessed 65 primary studies following the Kitchenham methodology. Based on a common structure of machine learning-based systems, we analyzed the structure of the proposed models and standardized concepts regarding the types of models, data acquisition, segmentation, preprocessing, feature extraction, classification, postprocessing, real-time processing, types of gestures, and evaluation metrics. Finally, we also identified trends and gaps that could open new directions of work for future research in the area of gesture recognition using EMG.

1. Introduction

The increase in computing power has brought many computing devices into the daily lives of human beings. A broad spectrum of applications and interfaces has been developed so that humans can interact with these devices. Interaction with these systems is easier when it is performed in a natural way (i.e., just as humans interact with each other using voice or gestures). Hand Gesture Recognition (HGR) is a significant element of Human–Computer Interaction (HCI), which studies computer technology designed to interpret commands given by humans.
HGR models are human–computer systems that determine what gesture was performed and when a person performed it. These systems are currently used in several applications, such as intelligent prostheses [1,2,3], sign language recognition [4,5], rehabilitation devices [6,7], and device control [8].
HGR models acquire data using, for example, gloves [9], vision sensors [10], inertial measurement units (IMUs) [11], surface electromyography sensors, and combinations of sensors, such as surface electromyography sensors and IMUs [12]. Although there are different options for data acquisition, all of them have limitations: gloves and vision sensors cannot be used by amputees; gloves can constrain normal movement, especially in cases involving the manipulation of objects; vision sensors suffer from occlusion, changes in illumination, and changes in the distance between the hands and the sensors; and IMUs and surface electromyography sensors generate noisy data [13,14]. Even though all these devices collect data related to the execution of a hand movement, surface electromyography sensors also capture the intention of the movement. This means that these sensors can also be used with amputees, who cannot execute the movements but have the intention to do so [15,16].
Surface electromyography, which we will refer to from now on as EMG, is a technique that records the electrical activity of skeletal muscles with surface sensors. This electrical activity is produced in two states of a skeletal muscle. The first state is when a skeletal muscle is at rest, where each of the muscular cells (i.e., muscle fibers) has an electric potential of approximately –80 mV [15]. The second state is when a skeletal muscle contracts, producing the electric potential that occurs in a motor unit (MU), which is composed of muscle fibers and a motor neuron. These electric potential differences are produced when a motor neuron activates a neuromuscular junction by sending two intracellular action potentials in opposite directions. These potentials are then propagated by depolarizing and re-polarizing each one of the muscle fibers [16]. The sum of the intracellular action potentials of all muscle fibers of a motor unit is called a motor unit action potential (MUAP). Therefore, when a skeletal muscle is contracted, the EMG signal is a linear summation of several trains of MUAPs [15].
There are two types of muscle contractions: static and dynamic. In a static contraction, the lengths of the muscle fibers do not change and the joints are not in motion, but the muscle fibers still contract; for example, when someone holds his/her hand still to make the peace sign. In a dynamic contraction, by contrast, the lengths of the muscle fibers change and the joints are in motion; for example, when someone waves their hand to make the hello gesture [17].
EMG signals can be modeled as stochastic processes that depend on the two types of contraction described above. First, the mathematical model for a static contraction (MMSC) is a stationary process because the mean and covariance remain approximately the same over time, and the EMG depends solely on muscle force [18]. Consider (1):
$EMG(t) = \sum_{i=1}^{N} s_i(t) * m_i(t)$,   (1)

where $N$ is the number of active MUs, $s_i(t)$ is the train of impulses that indicates the active moments of each MU, $m_i(t)$ are the MUAPs of each MU, and $*$ denotes convolution. However, the MMSC can be viewed as a non-stationary process when factors such as muscular fatigue and temperature affect the EMG [19].
Second, the mathematical model for a dynamic contraction (MMDC) is a non-stationary process, and its mathematical model is similar to amplitude modulation (AM):
$EMG(t) = a(t)\,w(t) + n(t)$,   (2)

where $a(t)$ is a function that indicates the intensity of the EMG signal (i.e., the information signal), $w(t)$ is a unit-variance Gaussian process representing the stochastic aspect of the EMG (i.e., the carrier signal), and $n(t)$ is the noise from the sensors and biological signal artifacts [17,20].
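To make the MMDC concrete, the following Python sketch generates a synthetic EMG signal according to Equation (2); the Gaussian envelope chosen for $a(t)$, the noise level, and the sampling rate are illustrative assumptions rather than values taken from the reviewed studies.

```python
import numpy as np

# Minimal sketch of the dynamic-contraction model in Equation (2):
# EMG(t) = a(t)w(t) + n(t). Envelope, noise level, and sampling rate
# are illustrative assumptions.
fs = 1000                                  # sampling rate in Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)          # two seconds of signal
rng = np.random.default_rng(0)

a = np.exp(-((t - 1.0) ** 2) / (2 * 0.15 ** 2))  # intensity (information) signal a(t)
w = rng.standard_normal(t.size)                  # unit-variance Gaussian carrier w(t)
n = 0.05 * rng.standard_normal(t.size)           # sensor/artifact noise n(t)

emg = a * w + n                            # synthetic EMG of a dynamic contraction
```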
The mathematical models of EMG are not used directly in HGR due to the difficulty of parameter estimation in non-stationary processes. However, machine learning (ML) methods are widely used because ML can infer a solution for non-stationary processes [21] using several techniques, for example, covariate shift techniques [21,22], class-balance change [22], and segmentation into short stationary intervals [23].
HGR using ML is just one approach to myoelectric control [24], which uses EMG signals to extract control signals to command external devices [25,26], for example, prostheses [1], drones [8], and input devices for a computer [27]. Other approaches include conventional amplitude-based control and the direct extraction of neural code from EMG signals. In conventional amplitude-based control, one EMG channel controls one function of a device (e.g., hand open is assigned to one channel, and hand closed to a second channel); when the amplitude of this EMG exceeds a predefined threshold, the function is activated [28,29,30,31]. In the direct extraction of neural code from EMG, the motor neuron spike trains are decoded from EMG signals and translated into commands [32,33,34,35].
For many applications, HGR models are required to work in real time. A human–computer system works in real time when a user performs an action on the system and the system responds fast enough that the response is perceived as instantaneous [25]. Moreover, the response time of a real-time human–computer system is relative to its application and to user perception [36]. For this reason, the controller delay, which is the response time of an HGR model, has been widely researched. For instance, a user does not perceive any delay when the controller delay is less than 100 ms in the control of devices such as a key or a switch [36,37]. In HGR using EMG, Hudgins et al. [38] stated that the acceptable computational complexity is limited by the controller delay of the system, which must be kept below 300 ms to reduce the user-perceived lag. This optimal controller delay has been generally agreed upon by many researchers [39,40]. However, several other optimal controller delays have been reported in the scientific literature, namely 500 ms [41] and 100–125 ms [42], the latter obtained using a box and blocks test, which is a target achievement test.
Most of the real-time HGR models are evaluated using metrics for machine learning, such as accuracy, recall, precision, F-score, and the $R^2$ error. However, this evaluation fails to reflect the performance exhibited in online scenarios, as it does not account for the adaptation of users to non-stationary signal features [43,44,45,46,47]. For example, Hargrove et al. [48] demonstrated that the inclusion of transient contractions (i.e., non-stationary signals) in the training data decreases the accuracy but improves the user performance in a real-time virtual clothespin task. Therefore, in order to evaluate real-life performance, real-time HGR models can be evaluated using target achievement tests, such as the box and blocks test [42,49], the target achievement control test [50], and the Fitts' law test [51], which is an international standard in HCI (ISO 9241-9).
Currently, there are many primary studies regarding real-time HGR models using EMG and ML, which, in several cases, do not have standardized concepts, such as types of models, real-time processing, types of hand gestures, and evaluation metrics. This standardized knowledge is essential for reproducibility and requires a Systematic Literature Review (SLR) of the current primary studies. To the best of our knowledge, there is no SLR regarding these HGR models. Therefore, we developed this SLR to present the state-of-the-art of the real-time HGR models using EMG and ML. Based on this SLR, we make three contributions to the field of HCI. First, we define a standard structure of real-time HGR models. Second, we standardize concepts, such as the types of models, data acquisition, segmentation, preprocessing, feature extraction, classification, postprocessing, real-time processing, types of gestures recognized, and evaluation metrics. Finally, we discuss future work based on the research gaps we identified.
Following this introduction, the article is organized as follows: Section 2 describes the methodology used to execute this SLR; Section 3 outlines the results and the discussion of the data extracted from the primary studies; and Section 4 and Section 5 contain the conclusions and future work, respectively.

2. Methodology

We developed an SLR based on the methodology proposed in [52,53], which comprises five stages: Research Questions (RQs), Search of Primary Studies, Analysis of Primary Studies, Data Extraction, and Threats to Validity.

2.1. Research Questions

In this stage, we define the following four research questions according to the research goal, which is to investigate the state-of-the-art of real-time HGR models that use EMG and ML:
  • RQ1. What is the structure of real-time HGR models that use EMG and ML?
  • RQ2. What is the controller delay and hardware used by real-time HGR models that use EMG and ML?
  • RQ3. What is the number and type of gestures recognized by real-time HGR models that use EMG and ML?
  • RQ4. What are the results and metrics used to evaluate the real-time HGR models that use EMG and ML?

2.2. Search of Primary Studies

In this stage, we search for the primary studies that can answer the four RQs stated in the previous section. This stage has three parts, which were performed manually. In the first part, we selected the literature repositories. In the second part, we extracted the keywords from the RQs and developed the search strings using these keywords. Finally, we searched for the primary studies in the literature repositories using the search strings.
We used four literature repositories: IEEE Xplore, ACM Digital Library, Science Direct, and Springer. We chose these repositories because they contain the most primary studies on real-time HGR models that use EMG and ML and because they provide peer-reviewed papers.
The keywords extracted from the RQs (see Section 2.1) are electromyography, hand gesture recognition, real-time, box and blocks, target achievement control, and Fitts' law. We then added the acronym of electromyography (i.e., "EMG") and the real-time variations online, real time, on line, and on-line. Therefore, the 11 keywords used in this SLR are electromyography, EMG, hand gesture recognition, real time, real-time, online, on line, on-line, box and blocks, target achievement control, and Fitts' law. Table 1 shows the 16 Search Strings (SS), which were developed by combining these 11 keywords with the Boolean operator "AND" (one possible combination scheme is sketched below). We do not use the keyword myoelectric control because this SLR is focused on HGR using EMG and ML, which is just one segment of the approaches to myoelectric control (see Section 1).
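The following sketch shows one plausible way to assemble 16 such search strings: each EMG term is paired with either "hand gesture recognition" plus one real-time variant or with one target achievement test. The pairing scheme is our assumption; Table 1 gives the authoritative list.

```python
from itertools import product

# Hypothetical reconstruction of the 16 search strings from the 11 keywords;
# the pairing scheme is an assumption, not the authors' published Table 1.
emg_terms = ["electromyography", "EMG"]
real_time_variants = ["real time", "real-time", "online", "on line", "on-line"]
tests = ["box and blocks", "target achievement control", "Fitts' law"]

second_terms = [f"hand gesture recognition AND {v}" for v in real_time_variants]
second_terms += tests
search_strings = [f"{e} AND {s}" for e, s in product(emg_terms, second_terms)]

assert len(search_strings) == 16  # 2 EMG terms x 8 second terms
```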
We looked for primary studies published from 1 January 2013 to 31 December 2019 (i.e., the last day of the search in the literature repositories) using the 16 search strings shown in Table 1. Table 2 shows the 1485 primary studies that were found in the four literature repositories (IEEE Xplore: 397, ACM Digital Library: 400, Science Direct: 329, and Springer: 359).
We discarded 1021 duplicated primary studies of the 1485 primary studies (IEEE Xplore: 206, ACM Digital Library: 273, Science Direct: 276, and Springer: 266). Additionally, we added 23 primary studies to this SLR using the snowballing techniques, which identify the articles that have cited the primary studies found in the literature repositories (i.e., forward snowballing), and the articles from their references (i.e., backward snowballing) [54] (see Table 2). Therefore, we obtained 487 primary studies in total. Figure 1 shows the resulting primary studies after each action carried out in the two stages: the search of primary studies and the analysis of primary studies.

2.3. Analysis of Primary Studies

We filtered the 487 primary studies based on the analysis of the titles, abstracts, and conclusions using the inclusion and exclusion criteria, and the assessment questions (see Figure 1). We finally selected 65 primary studies (see Table 3), which were used to answer the four RQs (see Section 2.1).

2.3.1. Inclusion and Exclusion Criteria

We established the inclusion and exclusion criteria based on the RQs (see Section 2.1). These criteria were used to determine whether a primary study contributes to answering the RQs. Table 4 shows the inclusion and exclusion criteria.

2.3.2. Quality Assessment

We defined three assessment questions to evaluate the comprehensiveness, reliability, and applicability of the primary studies. For each question, we established three possible answers with their scores: "Yes" = 1, "Partly" = 0.5, and "No" = 0. Thus, a primary study was rejected if the sum of the three scores was less than 2. The three assessment questions are:
  • Were the research objectives of the primary study clear?
  • Was the contribution of the primary study clear?
  • Was the structure of the HGR model shown?

2.4. Data Extraction

We extracted the data shown in Table 5 from the 65 selected primary studies (SPS), shown in Table 3. This extraction was performed in order to answer the four RQs (see Section 2.1).

2.5. Threats to Validity

We discuss the following possible threats to the validity of this SLR and the mitigation of these threats: an incomplete selection of the SPS, a biased analysis of the primary studies, and inaccurate data extraction.

2.5.1. Incomplete Selection of the SPS

There is a possibility that relevant studies were omitted for two reasons: the literature repositories may not have contained all the relevant studies for the four RQs, and the search strings may not have been appropriate for the four RQs. However, the authors performed the following three actions to mitigate these two threats: (1) We developed this SLR based on the Kitchenham methodology [52,53], which was shown in Section 2. (2) In this SLR, the four literature repositories and the 16 search strings were proposed by the first author, and the second and third authors assessed the relevance of these literature repositories and search strings. The four literature repositories were assessed according to the criterion that these repositories are the most used in the ML area. The 16 search strings were assessed based on the criterion that the keywords and the structures of the search strings are relevant to the four RQs. (3) We applied the snowballing techniques [54] to add 14 SPS to the SLR. This task was performed by the first author, and the third author assessed the relevance of these 14 SPS.

2.5.2. Biased Analysis of Primary Studies

The analysis of the primary studies (see Section 2.3) can be biased for two reasons: the inclusion and exclusion criteria may not be relevant to the four RQs, and the SPS may not be comprehensive, reliable, and applicable. To mitigate these two threats, the authors performed the following two actions: (1) The authors developed formal inclusion and exclusion criteria (see Section 2.3.1) and quality assessment criteria (see Section 2.3.2). These criteria were proposed by the first author and assessed by the second and third authors. (2) The first author selected the 65 primary studies by reading the titles, abstracts, and conclusions, and read the whole study whenever the title, abstract, and conclusions were not clear. Furthermore, these 65 SPS were assessed by the second and third authors.

2.5.3. Inaccurate Data Extraction

Generally, the extracted data can be inaccurate due to two possible problems: unsystematic data extraction, and data that are not relevant to the RQs. To address these problems, we extracted the data using a systematic methodology based on the four RQs (see Section 2.4). Moreover, the authors made sure that the extracted data answer the four RQs.

3. Results and Discussion

The data extracted from the 65 SPS (see Table 3) are presented and analyzed in five subsections: the study overview subsection and four further subsections, one for each RQ (see Section 2.1). Although some SPS presented more than one HGR model, we selected from each the model with the best performance in the evaluation; therefore, we used 65 HGR models for this review.

3.1. Study Overview

The study overview gives a general vision of the settings used in the SPS. Among other data, we extracted the publication year and the type of publication. Figure 2a shows the number of SPS per year, which has increased steadily since 2013. Moreover, Figure 2b shows that most of the SPS were presented at conferences (see also Table 3).

3.2. Results of the RQ1 (What Is the Structure of Real-Time HGR Models Using EMG and ML?)

We found that the structures of the 65 real-time HGR models are not uniform across the studies. However, they have some stages in common, such as Data Acquisition (DA), Segmentation (SEGM), Preprocessing (PREP), Feature Extraction (FE), Classification (CL), and Postprocessing (POSTP). We present a standard structure assembled from these frequent stages; the result is illustrated in Figure 3 (a code sketch of this pipeline is given below). Note that some SPS did not use all stages of the standard structure because Segmentation, Preprocessing, Feature Extraction, and Postprocessing are optional stages (i.e., without them a model is still feasible). Table 6 shows the stages of the standard structure used by the SPS.
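The following Python sketch outlines the six-stage standard structure; every stage body is a placeholder (offset compensation, mean absolute value, majority voting, and a scikit-learn-style classifier are assumed stand-ins for the techniques listed in Table 6).

```python
import numpy as np

# Sketch of the six-stage standard structure (Figure 3). Each stage body is a
# placeholder; `model` is assumed to be any trained classifier with a
# scikit-learn-style predict() method.
def segment(emg, length=200, stride=50):
    # overlapping sliding windows over a (samples, channels) array
    return [emg[s:s + length] for s in range(0, len(emg) - length + 1, stride)]

def preprocess(window):
    return window - window.mean(axis=0)            # e.g., offset compensation

def extract_features(window):
    return np.abs(window).mean(axis=0)             # e.g., mean absolute value

def classify(model, features):
    return model.predict(features.reshape(1, -1))[0]

def postprocess(labels):
    values, counts = np.unique(labels, return_counts=True)
    return values[counts.argmax()]                 # e.g., majority voting

def recognize(emg, model):
    # DA -> SEGM -> PREP -> FE -> CL -> POSTP
    labels = [classify(model, extract_features(preprocess(w)))
              for w in segment(emg)]
    return postprocess(labels)
```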
Aside from the structure of the models, we identified two types of models: the individual models and the general models. Individual models are trained relying on the gestures (data) of a person and recognize the gestures of that same person. General models are trained with the data of several people and recognize the gestures of any person. We found 44 SPS that developed individual models (SPS 1, SPS 2, SPS 3, SPS 5, SPS 6, SPS 8, SPS 9, SPS 10, SPS 13, SPS 15, SPS 16, SPS 24, SPS 25, SPS 27, SPS 28, SPS 30, SPS 33, SPS 34, SPS 36, SPS 37, SPS 38, SPS 39, SPS 41, SPS 42, SPS 43, SPS 44, SPS 45, SPS 47, SPS 48, SPS 49, SPS 51, SPS 52, SPS 53, SPS 55, SPS 56, SPS 57, SPS 58, SPS 59, SPS 60, SPS 61, SPS 62, SPS 63, SPS 64, and SPS 65), and 11 SPS that developed general models (SPS 7, SPS 11, SPS 17, SPS 22, SPS 23, SPS 26, SPS 31, SPS 32, SPS 35, SPS 40, and SPS 46). The 10 remaining studies do not indicate any type of HGR model. Out of the 11 general models, SPS 35 is the only general model that was evaluated using EMG data from people who did not participate in the training phase. The other 10 general models only used EMG data from people who participated in the training; therefore, it is not possible to conclude that these 10 models are able to recognize gestures of any person.

3.2.1. Data Acquisition

In the Data Acquisition stage, EMGs are acquired from EMG sensors, which can be part of homemade or commercial devices. Table 7 shows the number of sensors, the sampling rates, and the acquisition devices used in the HGR models. We found that 27 HGR models used eight sensors: 21 of them (SPS 2, SPS 3, SPS 4, SPS 7, SPS 8, SPS 9, SPS 13, SPS 17, SPS 18, SPS 19, SPS 20, SPS 34, SPS 35, SPS 36, SPS 40, SPS 44, SPS 46, SPS 47, SPS 52, SPS 56, and SPS 61) used the commercial Myo armband, which has eight sensors with a sampling rate of 200 Hz, and the other six (SPS 5, SPS 25, SPS 27, SPS 59, SPS 62, and SPS 63) used homemade devices with a design similar to the Myo armband, whose sampling rates are 1000 Hz, 960 Hz, 1000 Hz, 1000 Hz, 1200 Hz, and 1000 Hz, respectively.
Additionally, the EMG sampling rate of 16 HGR models (SPS 1, SPS 5, SPS 10, SPS 11, SPS 26, SPS 27, SPS 30, SPS 31, SPS 32, SPS 37, SPS 38, SPS 39, SPS 43, SPS 48, SPS 49, and SPS 55) is 1000 Hz because these SPS indicate that, according to the Nyquist sampling theorem, the sampling rate must be at least twice the highest frequency of the EMG, and approximately 95% of the signal power of the EMG is below 400–500 Hz [114,115,116]. Table 7 also shows the use of commercial devices, including the Myo armband from Thalmic Labs Inc., the MA300 from Motion Lab Systems Inc., the Bio Radio 150 from Cleveland Medical Devices Inc., the ME6000 from Mega Electronics Ltd., the Analog Front End (ADS1298) from Texas Instruments, the Telemyo 2400T G2 from Noraxon, and the EMG-USB2 from OT Bioelettronica. Furthermore, two models (SPS 43 and SPS 45) use high-density EMG sensors.

3.2.2. Segmentation

EMGs are partitioned into multiple segments or windows using different techniques, such as gesture detection and sliding windowing (see Table 7). Gesture detection computes the beginning and the end of a hand gesture and returns only the EMG that corresponds to muscle contraction. Therefore, the segment lengths are variable, as they depend on the duration of the hand gestures. The sliding windowing techniques partition the EMG into fixed adjacent segments (i.e., adjacent sliding windowing) or fixed overlapping segments (i.e., overlapping sliding windowing) (see Figure 4 and the sketch below). By increasing the window length, up to a certain point, the controller delay increases, and the accuracy of the models also increases as more data are collected for recognition [25,40].
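A minimal sketch of the two windowing schemes, assuming a 200 Hz device (e.g., the Myo armband) and a 200 ms window; the window length and stride values are illustrative:

```python
import numpy as np

# Adjacent windows use a stride equal to the window length; overlapping
# windows use a smaller stride. Values below are illustrative assumptions.
def sliding_windows(emg, length, stride):
    return np.stack([emg[s:s + length]
                     for s in range(0, len(emg) - length + 1, stride)])

fs = 200                                  # e.g., Myo armband sampling rate
emg = np.random.randn(2 * fs, 8)          # two seconds of 8-channel EMG

adjacent = sliding_windows(emg, length=40, stride=40)     # 200 ms, no overlap
overlapping = sliding_windows(emg, length=40, stride=10)  # 200 ms, 150 ms overlap
```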

3.2.3. Preprocessing

HGR models use preprocessing techniques that transform the EMG into an input signal for Feature Extraction or, if the structure of the HGR model does not include Feature Extraction, for the ML algorithm (see Table 6). For example, a common preprocessing technique is the use of a notch filter at 50 or 60 Hz that eliminates the AC frequency of the power lines (SPS 10). Other examples include Offset Compensation, Pre-smoothing, Filtering, Rectification, Amplification, and the use of the Teager–Kaiser energy operator (see Table 7). Offset Compensation is a technique that eliminates noise through the compensation of the average value of the EMG:
$EMG_{raw} = (x_1, x_2, \ldots, x_n)$   (3)

$\mathrm{mean}(EMG_{raw}) = \bar{x} = \dfrac{\sum_{i=1}^{n} x_i}{n}$   (4)

$EMG_{offset} = \left( (x_1 - \bar{x}), (x_2 - \bar{x}), \ldots, (x_n - \bar{x}) \right)$   (5)

$\mathrm{mean}(EMG_{offset}) = 0$,   (6)

where $x_1, x_2, \ldots, x_n$ are the raw EMG values, $\bar{x}$ is the average value of the signal, and $(x_1 - \bar{x}), (x_2 - \bar{x}), \ldots, (x_n - \bar{x})$ are the EMG values after the use of offset compensation. Pre-smoothing is a technique that computes the mean of the last $m$ values of the EMG and then sets this mean as the current value $x_n$ of the signal:

$EMG_{raw} = (x_1, x_2, \ldots, x_n)$   (7)

$x_n = \dfrac{\sum_{i=n-m+1}^{n} x_i}{m}$,   (8)

where $x_1, x_2, \ldots, x_n$ are the raw EMG values and $x_n$ is the current value, which is based on the mean of the $m$ previous values of the raw EMG. Filtering is a technique that removes some unwanted frequencies or an unwanted frequency band from the raw EMG. Rectification transforms the negative values into positive values (e.g., with the absolute value function). The Teager–Kaiser energy operator increases the signal-to-noise ratio to improve the muscle activity onset detection of a gesture [117]. The most used preprocessing technique is filtering (see Table 7).
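The following sketch implements four of these techniques for a single-channel signal; the notch frequency, quality factor, and smoothing length $m$ are illustrative assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

# Sketches of four preprocessing techniques on a 1-D EMG signal; the notch
# parameters and smoothing length m are illustrative assumptions.
def offset_compensation(emg):
    return emg - emg.mean()                  # Equations (3)-(6): zero-mean signal

def pre_smooth(emg, m=5):
    # mean of the last m values assigned to each position (Equations (7)-(8))
    return np.convolve(emg, np.ones(m) / m)[: len(emg)]

def notch_filter(emg, fs=1000.0, f0=50.0):
    b, a = iirnotch(w0=f0, Q=30.0, fs=fs)    # remove powerline interference
    return filtfilt(b, a, emg)

def rectify(emg):
    return np.abs(emg)                       # absolute-value rectification
```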

3.2.4. Feature Extraction

Feature extraction techniques map the EMG into a feature set. These techniques extract features in different domains, such as the time, frequency, time-frequency, space, and fractal domains. Table 8 shows the domains of the feature extraction techniques used by the models. Most real-time HGR models use time-domain features because the controller delay of their computation is lower than that of features in other domains (see Table 9). The mean absolute value is the most used feature in the 65 studies analyzed; a sketch of this and other common time-domain features is given below.
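As an illustration, the sketch below computes the mean absolute value together with three other commonly used time-domain features on a window of shape (samples, channels); the zero-crossing noise threshold is an assumed value.

```python
import numpy as np

# Four common time-domain features, computed per channel; the zero-crossing
# noise threshold is an illustrative assumption.
def time_domain_features(window, zc_threshold=0.01):
    mav = np.abs(window).mean(axis=0)                     # mean absolute value
    rms = np.sqrt((window ** 2).mean(axis=0))             # root mean square
    wl = np.abs(np.diff(window, axis=0)).sum(axis=0)      # waveform length
    sign_change = np.diff(np.sign(window), axis=0) != 0   # sign flips
    above_noise = np.abs(np.diff(window, axis=0)) > zc_threshold
    zc = (sign_change & above_noise).sum(axis=0)          # zero crossings
    return np.concatenate([mav, rms, wl, zc])
```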

3.2.5. Classification

In this stage, classifiers generate class labels (i.e., the gestures recognized) from a feature set of the EMG. The classifiers used are support vector machines (SPS 7, SPS 10, SPS 14, SPS 15, SPS 18, SPS 23, SPS 25, SPS 27, SPS 28, SPS 30, SPS 38, SPS 39, SPS 49, SPS 52, SPS 53, SPS 55, and SPS 59), feedforward neural networks (SPS 2, SPS 16, SPS 17, SPS 22, SPS 24, SPS 26, SPS 29, SPS 32, SPS 35, SPS 36, SPS 44, SPS 42, SPS 46, SPS 47, SPS 56, SPS 60, and SPS 61), linear discriminant analysis (SPS 5, SPS 11, SPS 13, SPS 31, SPS 37, SPS 45, SPS 48, SPS 57, SPS 63, SPS 64, and SPS 65), convolutional neural networks (CNN) (SPS 4, SPS 20, SPS 43, and SPS 62), CNN with transfer learning (SPS 34), radial basis function networks (SPS 40), temporal convolutional networks (SPS 41), k-nearest neighbors and dynamic time warping (SPS 8 and SPS 9), collaborative-representation-based classification (SPS 19), k-nearest neighbors (SPS 1), k-nearest neighbors and decision trees (SPS 12), binary tree-support vector machines (SPS 21), vector auto-regressive hierarchical hidden Markov models (SPS 6), Gaussian mixture models and hidden Markov models (SPS 3), quadratic discriminant analysis (SPS 33), fuzzy logic (SPS 50), recurrent neural networks (SPS 51), generalized regression neural networks (SPS 54), and a one-vs-one classifier (SPS 58). The most commonly used ML algorithms are support vector machines, feedforward neural networks, and linear discriminant analysis, as sketched below.
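A minimal sketch of the classification stage using the most common algorithm, a support vector machine; the feature matrix, labels, and hyperparameters are synthetic stand-ins, not values from any reviewed study.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in data: 300 windows x 32 EMG features, 5 gesture classes.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((300, 32))
y_train = rng.integers(0, 5, size=300)

clf = SVC(kernel="rbf", C=1.0)             # RBF-kernel SVM (assumed settings)
clf.fit(X_train, y_train)

X_test = rng.standard_normal((10, 32))
predicted_gestures = clf.predict(X_test)   # window-level class labels
```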

3.2.6. Postprocessing

To improve the accuracy of the HGR models, postprocessing techniques adapt the output of the ML algorithm to the final application. Only 15 out of the 65 SPS used postprocessing techniques, such as majority voting (SPS 2, SPS 11, SPS 21, SPS 37, and SPS 43; see the sketch below), elimination of consecutive repetitions (SPS 8, SPS 9, SPS 36, and SPS 51), thresholding (SPS 35 and SPS 44), and velocity ramps (SPS 60, SPS 63, SPS 64, and SPS 65).
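As an example, the sketch below applies majority voting over a stream of window-level predictions; the vote-buffer length is an illustrative assumption.

```python
from collections import Counter, deque

# Majority-vote postprocessing over window-level labels; the buffer length
# is an illustrative assumption.
def majority_vote(labels, buffer_len=5):
    buffer, smoothed = deque(maxlen=buffer_len), []
    for label in labels:
        buffer.append(label)
        smoothed.append(Counter(buffer).most_common(1)[0][0])
    return smoothed

# A single spurious 'fist' between 'wave' predictions is voted away:
print(majority_vote(["wave", "wave", "fist", "wave", "wave"]))
# -> ['wave', 'wave', 'wave', 'wave', 'wave']
```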
Many works analyze some of the stages shown in Section 3.2 to determine the best structure for improving the accuracy of the HGR models, for example, the data acquisition [39,48,118,119], optimal window length [120], filtering [121,122], feature extraction [123], and classification [124,125] stages. However, the results are inconclusive because the structure of the HGR models depends on the environment in which the models are developed (i.e., the data sets used, the people who participated in the evaluation, the application of the models, etc.).

3.3. Results of the RQ2 (What Is the Controller Delay and Hardware Used by Real-Time HGR Models Using EMG and ML?)

3.3.1. Controller Delay of the HGR Models

The controller delay is the sum of two values: the data collection time (DCT) (i.e., the window length) and the data analysis time (DAT) [39,42]. In real-time processing, the DCT and DAT should be as short as possible, but the DCT should also allow the HGR model to collect enough EMG data to recognize a hand gesture. For instance, in prosthesis control, the optimal DCT using four EMG sensors with a sampling rate of 1 kHz should be between 150 and 250 ms [120].
An HGR model using EMG is considered to work in real-time when the response time (i.e., controller delay) is less than the optimal controller delay. There are several optimal controller delays reported in the scientific literature, namely 300 ms [39], 500 ms [41], and 100 ms for fast prosthetic prehensors and 125 ms for slower prosthetic prehensors [42].
In accordance with the Inclusion and Exclusion Criteria (see Section 2.3.1), all 65 HGR models indicate that they are real-time models. However, there are some SPS that did not report the controller delay (i.e., DCT and DAT) of their HGR models. Table 10 shows the DCT and DAT of the SPS.
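The decomposition of the controller delay can be illustrated as follows; the window length, sampling rate, and stand-in processing chain are assumptions, and the 300 ms bound is the threshold from Hudgins et al. [38].

```python
import time
import numpy as np

# DCT is fixed by the window length and sampling rate; DAT is measured
# around the processing chain. Device and window size are assumed.
fs, window_len = 200, 40                   # e.g., Myo armband, 200 ms window
dct_ms = 1000.0 * window_len / fs          # data collection time (DCT)

window = np.random.randn(window_len, 8)    # one 8-channel window
start = time.perf_counter()
features = np.abs(window).mean(axis=0)     # stand-in for the whole chain
dat_ms = 1000.0 * (time.perf_counter() - start)  # data analysis time (DAT)

controller_delay_ms = dct_ms + dat_ms
print(controller_delay_ms < 300)           # True for this toy chain
```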

3.3.2. Hardware Used

The controller delay of the HGR models depends not only on their structure but also on the hardware used to process the models. For example, an HGR model may not work in real time if the user perceives delays in the HGR response because the device has limited processing capabilities. The same HGR model may be considered to work in real time on another device with better processing capabilities. For this reason, when a model is described, it is fundamental to indicate the hardware characteristics of the devices used to run it. Table 10 shows the two types of hardware used: personal computers and embedded systems. Ten HGR models were processed on personal computers, such as laptops and desktops; five HGR models were processed on embedded systems; and the remaining models did not indicate the hardware used.

3.4. Results of the RQ3 (What Is the Number and Type of Gestures Recognized by Real-Time HGR Models Using EMG and ML?)

3.4.1. Number of Gestures Recognized

The number of gestures recognized is the number of classes of an HGR model. There are HGR models that recognize the same number of gestures but different sets of gestures. For example, two HGR models each recognize four gestures, but the classes of the first model are thumb up, okay, wrist valgus, and wrist varus (SPS 14), whereas the classes of the second model are hand extension, hand grasp, wrist extension, and thumb flexion (SPS 22). Hence, to compare these models, it is important to consider the difference in the gestures as well.

3.4.2. Type of Gestures Recognized

According to the type of movement, hand gestures are classified as static or dynamic. A static gesture is made when the skeletal muscles are in constant contraction (i.e., there is no movement during the gesture). In a dynamic gesture, the skeletal muscles are also in contraction, but the contraction is not constant, which indicates that there is movement during the gesture.
The EMG data generated by a gesture have two states: transient and steady. The EMG data in the transient state are generated during the transition from one gesture to another, and the EMG data in the steady state are generated while a gesture is maintained [38]. Moreover, the offline classification of hand gestures using EMG data in the steady state is more accurate than in the transient state because the variance of the EMG data in the transient state changes more over time (i.e., it is a non-stationary process) than in the steady state [40]. However, including EMG data in the transient state in the training phase improves subject performance in a real-time virtual clothespin task [46,48].
Figure 5 presents the EMG data of a person who made a long-term gesture (i.e., a gesture that lasts a long time) after a relaxed position or rest gesture. In this figure, the EMG data in the transient state are generated during the transition from the rest gesture to the peace sign, and the EMG data in the steady state are generated while the peace sign is maintained. Short-term gestures (i.e., gestures that last only a short time) generate more EMG data in the transient state than in the steady state, as most of the time is spent in transitions from one gesture to another (see Figure 6).
The durations of the gestures used by the models are shown in Table 11. This table shows seven aspects of the gestures recognized by the HGR models reviewed in this SLR: the number of classes, the number of gestures per person in the training set (NGpPT), the number of people who participated in the training (NPT), the number of gestures per person in the evaluation set (NGpPE), the type of gestures recognized, the state of the EMG data used, and the duration of the gestures (DG). NGpPT, NPT, and DG determine the amount of EMG data used to train the individual ($NGpPT \times DG$) and general ($NGpPT \times NPT \times DG$) models. We found that 63 out of 65 HGR models recognized static gestures, and only one HGR model recognized both dynamic and static gestures (SPS 25); no HGR model recognized only dynamic gestures. Additionally, six SPS used EMG data in the steady state, two SPS used EMG data in the transient state, three SPS used EMG data in both the steady and transient states, and the remaining HGR models did not indicate the state of the EMG data. In total, 31 out of the 65 HGR models considered the rest gesture (i.e., the hand does not make any movement) as a class.
Finally, 5 out of the 65 HGR models (SPS 59, SPS 60, SPS 62, SPS 63, and SPS 64) recognized static gestures simultaneously to control multiple degrees of freedom of a prosthesis, which replicates simultaneous movements, such as wrist rotation and grasp to turn a doorknob. The remaining HGR models recognized gestures sequentially.

3.5. Results of the RQ4 (What Are the Metrics Used to Evaluate Real-Time HGR Models Using EMG and ML?)

According to the type of evaluation (see Section 1), we divide the SPS into two groups: HGR models evaluated using metrics for machine learning (56 models) and HGR models evaluated using target achievement tests (nine models).

3.5.1. HGR Models Evaluated Using Metrics for Machine Learning (from SPS 1 to SPS 56)

These 56 HGR models used 13 evaluation metrics (see Table 12): accuracy (9), recall (10), precision (11), accuracy per user (12), recall per user (13), precision per user (14), median of the accuracy per user (15), standard deviation of the accuracy per user (16), standard deviation of the accuracy per class (17), standard deviation of each user's accuracy (18), standard deviation of the recalls of each class (19), classification error (20), and Kappa index (21). Accuracy is the most used metric; Table 12 shows the evaluation metrics used by these 56 models. The formulas of these evaluation metrics are:
$\mathrm{Accuracy} = \dfrac{\sum_{i=1}^{u} \sum_{k=1}^{g} n_{i,k,k}}{\sum_{i=1}^{u} \sum_{j=1}^{g} \sum_{k=1}^{g} n_{i,j,k}}$   (9)

$\mathrm{Recall}_{\mathrm{class}(k)} = \dfrac{\sum_{i=1}^{u} n_{i,k,k}}{\sum_{i=1}^{u} \sum_{j=1}^{g} n_{i,j,k}}$   (10)

$\mathrm{Precision}_{\mathrm{class}(j)} = \dfrac{\sum_{i=1}^{u} n_{i,j,j}}{\sum_{i=1}^{u} \sum_{k=1}^{g} n_{i,j,k}}$   (11)

$\mathrm{Accuracy}_{\mathrm{user}(i)} = \dfrac{\sum_{k=1}^{g} n_{i,k,k}}{\sum_{j=1}^{g} \sum_{k=1}^{g} n_{i,j,k}}$   (12)

$\mathrm{Recall}_{\mathrm{user}(i),\mathrm{class}(k)} = \dfrac{n_{i,k,k}}{\sum_{j=1}^{g} n_{i,j,k}}$   (13)

$\mathrm{Precision}_{\mathrm{user}(i),\mathrm{class}(j)} = \dfrac{n_{i,j,j}}{\sum_{k=1}^{g} n_{i,j,k}}$   (14)

$\mathrm{Median}\left(\mathrm{Accuracy}_{\mathrm{user}(1)}, \mathrm{Accuracy}_{\mathrm{user}(2)}, \ldots, \mathrm{Accuracy}_{\mathrm{user}(u)}\right)$   (15)

$SD_{\mathrm{users}} = \sqrt{\dfrac{\sum_{i=1}^{u} \left(\mathrm{Accuracy}_{\mathrm{user}(i)} - \mathrm{Accuracy}_{\mathrm{model}}\right)^2}{u-1}}$   (16)

$SD_{\mathrm{classes}} = \sqrt{\dfrac{\sum_{k=1}^{g} \left(\mathrm{Recall}_{\mathrm{class}(k)} - \mathrm{Accuracy}_{\mathrm{model}}\right)^2}{g-1}}$   (17)

$SD_{\mathrm{user}(i)} = \sqrt{\dfrac{\sum_{k=1}^{g} \left(\mathrm{Recall}_{\mathrm{user}(i),\mathrm{class}(k)} - \mathrm{Accuracy}_{\mathrm{user}(i)}\right)^2}{g-1}}$   (18)

$SD_{\mathrm{class}(k)} = \sqrt{\dfrac{\sum_{i=1}^{u} \left(\mathrm{Recall}_{\mathrm{user}(i),\mathrm{class}(k)} - \mathrm{Recall}_{\mathrm{class}(k)}\right)^2}{u-1}}$   (19)

$\mathrm{Error} = 1 - \mathrm{Accuracy}$   (20)

$\mathrm{Kappa} = \dfrac{\mathrm{Accuracy} - p_e}{1 - p_e}, \quad p_e = \dfrac{1}{N^2} \sum_{i=1}^{u} \sum_{a=1}^{g} \left(\sum_{k=1}^{g} n_{i,a,k}\right) \left(\sum_{j=1}^{g} n_{i,j,a}\right), \quad N = \sum_{i=1}^{u} \sum_{j=1}^{g} \sum_{k=1}^{g} n_{i,j,k}$   (21)
where $n_{i,j,k}$ is the number of gestures made by user $i$ that were recognized by the model as class $j$ but actually belonged to class $k$; $i \in I = \{i_1, i_2, \ldots, i_u\}$ is the set of test users; $j \in J = \{j_1, j_2, \ldots, j_g\}$ is the set of predicted classes; $k \in K = \{k_1, k_2, \ldots, k_g\}$ is the set of actual classes; $u$ is the total number of test users; and $g$ is the number of classes.
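For illustration, the sketch below evaluates Equations (9)-(12) from a confusion tensor $n[i, j, k]$ filled with random stand-in counts.

```python
import numpy as np

# Equations (9)-(12) from the confusion tensor n[i, j, k]
# (user i, predicted class j, actual class k); counts are random
# stand-in data for u = 4 users and g = 5 classes.
u, g = 4, 5
n = np.random.default_rng(0).integers(1, 20, size=(u, g, g))

accuracy = np.trace(n.sum(axis=0)) / n.sum()                     # Eq. (9)
recall = np.einsum("ikk->k", n) / n.sum(axis=(0, 1))             # Eq. (10)
precision = np.einsum("ijj->j", n) / n.sum(axis=(0, 2))          # Eq. (11)
accuracy_per_user = np.einsum("ikk->i", n) / n.sum(axis=(1, 2))  # Eq. (12)
```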
We identified five machine-learning metrics that evaluate the entire HGR model. The first one is accuracy, which is the fraction of gestures recognized correctly among all the test data. Second, the recall is the fraction of gestures recognized correctly for a class among the test data of this class. Third, the precision is the fraction of gestures recognized correctly of a class among the gestures recognized by the HGR model as this class. Fourth, the standard deviation of the accuracy per user is the amount of dispersion of the recognition accuracies per user. Finally, the standard deviation of the accuracy per class is the amount of dispersion of the recalls of a particular model.
These metrics can produce biased results for two reasons: an incorrect definition of a true positive, and an unbalanced test set. To determine the recognition accuracy, a gesture is considered a true positive (i.e., the gesture is recognized correctly) when the HGR model determines both what gesture was performed and when this gesture was performed by a person. However, only SPS 51 is evaluated in this way. Eleven HGR models (SPS 2, SPS 5, SPS 6, SPS 7, SPS 8, SPS 9, SPS 19, SPS 20, SPS 34, SPS 35, and SPS 36) determine the classification accuracy because they only consider what gesture was performed by a person as a true positive, and the remaining models do not state what they consider a true positive.
In addition, a test set is balanced when it has the same number of samples per class and the same number of samples per user (see Table 13). For example, if an HGR model is evaluated using a set that has more data for user A, the accuracy of this model and the accuracy of user A tend to be the same.
There are five SPS (SPS 2, SPS 5, SPS 8, SPS 9, and SPS 18) in which the evaluation was performed with data acquired without feedback (i.e., the correctness of the classification was not provided during the evaluation); thus, people could not adjust their movements to the HGR model. Eight SPS performed the evaluation with data acquired with feedback from the HGR model (SPS 1, SPS 4, SPS 11, SPS 12, SPS 13, SPS 17, SPS 20, and SPS 29), and the remaining SPS do not provide information about feedback.
Table 13 shows the recognition accuracies, the number of people who participated in the evaluation, the type of data set (i.e., balanced or unbalanced), and the use of cross-validation by the 56 HGR models. The largest number of participants is 80 (SPS 23). Three HGR models were evaluated using EMG data from amputees (SPS 6, SPS 21, and SPS 48). Moreover, 19 HGR models use cross-validation, a technique that minimizes the probability of biased results on small data sets (see Table 13).

3.5.2. HGR Models Evaluated Using Target Achievement Tests (from SPS 57 to SPS 65)

These nine HGR models used three target achievement tests: the motion test (SPS 60), the target achievement control test (TAC) (SPS 60, SPS 63, and SPS 65), and the Fitts' law test (FLT) (SPS 59, SPS 61, SPS 62, SPS 64, and SPS 65). These three tests used ten metrics: throughput (SPS 57, SPS 58, SPS 59, SPS 61, SPS 62, SPS 64, and SPS 65), path efficiency (SPS 57, SPS 58, SPS 59, SPS 60, SPS 61, SPS 62, SPS 64, and SPS 65), overshoot (SPS 57, SPS 58, SPS 59, SPS 61, SPS 62, SPS 64, and SPS 65), average speed (SPS 57), completion rate (SPS 57, SPS 58, SPS 60, SPS 61, SPS 63, SPS 64, and SPS 65), stopping distance (SPS 58), completion time (SPS 60, SPS 63, and SPS 65), real-time accuracy (SPS 60), length error (SPS 63), and reaction time (SPS 64) (see Table 14).
The motion test was proposed to evaluate the myoelectric capacity of patients with targeted muscle reinnervation [128]. In this test, patients must maintain a gesture until the HGR model has made a predetermined number of correct predictions. In the TAC test, patients control a virtual prosthesis to reach a target and hold it for a dwell time, which is generally 1 s [50]; they have a trial time to reach the target, which is generally 15 s. The FLT is similar to the TAC test, but users control a circular cursor with two or three degrees of freedom. Fitts' law states that there is a trade-off between speed and accuracy [51,108], which is defined by:
$MT = a + b \cdot ID$,   (22)
where $MT$ is the movement time, $a$ and $b$ are empirical constants, and $ID$ is the index of difficulty (ID) of a target (see Equation (23)), which is calculated using the distance $D$ from an initial point to the target and the width $W$ of the target. Throughput is a metric proposed by Fitts, defined as the ratio between the ID and the MT (see Equation (24)), that summarizes the performance of a control system. The results of the FLT are reliable when the test combines a variety of IDs [129].
$ID = \log_2\left(\dfrac{D}{W} + 1\right)$   (23)

$TP = \dfrac{ID}{MT}$   (24)
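A worked example of Equations (23) and (24), with illustrative values for the distance, width, and movement time:

```python
import math

# Illustrative values: a target 20 cm away, 4 cm wide, reached in 1.5 s.
D, W, MT = 20.0, 4.0, 1.5

ID = math.log2(D / W + 1)   # index of difficulty: ~2.58 bits
TP = ID / MT                # throughput: ~1.72 bits/s
print(ID, TP)
```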
The people who participated in these tests received feedback (i.e., the correctness of classification was provided in the evaluation). Four out of these nine HGR models were evaluated with four amputees (SPS 63), two amputees (SPS 59, and SPS 64), and one amputee (SPS 65).
In order to achieve conclusive results, it is necessary to consider the sample size, which is the number of people who participated in the evaluation ($n_1$) (see Table 11) times the number of gestures per person ($n_2$) (see Table 13), so as to obtain statistically significant results. Using the typical values of a statistical hypothesis test (confidence level of 95%, margin of error of 5%, and population portion of 50%), we estimated $n_1$ using the Normal distribution via the Central Limit Theorem (25), and $n_2$ using Hoeffding's inequality (26), which is widely used in machine learning theory.
$n_1 \geq \dfrac{z^2\, p\,(1-p)}{\epsilon^2} = \dfrac{1.96^2 \times 0.5\,(1-0.5)}{0.05^2} \approx 385$   (25)

$n_2 \geq \dfrac{-\ln\left(\frac{1-\alpha}{2}\right)}{2\epsilon^2} = \dfrac{-\ln\left(\frac{1-0.95}{2}\right)}{2 \times 0.05^2} \approx 738$,   (26)
where $z$ is the critical value of the normal distribution for a confidence level of 95%, $\epsilon$ is the margin of error, $p$ is the population portion, and $\alpha$ is the confidence level. Therefore, the sample size ($n_1 \cdot n_2$ gestures in the test set) must be on the order of hundreds of thousands. None of the works presented so far considered these values to achieve a significant result. In the scientific literature, many EMG data sets are available [130], but, to the best of our knowledge, the data set with the largest $n_1$ has 30 participants [131], and the one with the largest $n_2$ has 40 gestures per person [84,132].
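The estimates in Equations (25) and (26) can be reproduced as follows:

```python
import math

# Sample-size estimates for a 95% confidence level, a 5% margin of error,
# and a population portion of 50%, as in Equations (25) and (26).
z, p, eps, alpha = 1.96, 0.5, 0.05, 0.95

n1 = math.ceil(z**2 * p * (1 - p) / eps**2)                 # people: 385
n2 = math.ceil(-math.log((1 - alpha) / 2) / (2 * eps**2))   # gestures/person: 738
print(n1, n2, n1 * n2)  # 385 738 284130 -> hundreds of thousands of gestures
```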

4. Conclusions

This SLR analyzes works that propose HGR models using surface EMG and ML. Following the Kitchenham methodology, we introduced four RQs based on the main goal of this SLR, which was to analyze the state-of-the-art of these models. To answer these four RQs, we presented, analyzed, and discussed the data extracted from 65 selected primary studies. Below are our findings in regard to the four RQs.
Structure: The structure of the models studied varies from one work to another. However, we were able to examine the structure of these models using a standard structure composed of six stages: data acquisition, segmentation, preprocessing, feature extraction, classification, and postprocessing. Under this standard structure, we studied the types of HGR models, the number of EMG sensors, the sampling rate, the sensors, the segmentation and preprocessing techniques, the extracted features, the domain of the extracted features, and the ML algorithm. The most used structure is: eight EMG sensors with a sampling rate between 200 Hz and 1000 Hz (data acquisition), overlapping sliding windowing (segmentation), filtering (preprocessing), the mean absolute value (feature extraction), and support vector machines or feedforward neural networks (classification).
Controller delay and hardware: The controller delay of gesture recognition models is the sum of two values: the data collection time (DCT) and the data analysis time (DAT). A recognition model works in real time when this sum is less than an optimal controller delay. However, the works analyzed report several optimal controller delays for different applications, suggesting that the optimal controller delay is relative to the user perception and to the application of a recognition model.
Number and types of gestures recognized: The 65 works analyzed propose models that recognize different numbers and types of gestures: 31 works considered the rest gesture as a class to be recognized; only one model recognized both static and dynamic gestures; and the remaining models recognized only static gestures. No model recognized only dynamic gestures, as most of the EMG data generated by dynamic gestures are in the transient state. Recognizing gestures using EMG data in the transient state is more complex than in the steady state because the former behaves as a non-stationary process. The classification of hand gestures using EMG data in the steady state is more accurate than in the transient state, and only nine works recognized short-term gestures (i.e., using EMG data in the transient state).
Metrics and results: We divided the SPS according to the type of evaluation: machine-learning metrics and target achievement tests. Fifty-six SPS evaluated their models using machine-learning metrics. We found 13 machine-learning metrics and three target achievement tests. The training and testing protocols vary among the works, making the comparison of their performance very difficult. Moreover, considering that many works do not describe these protocols or the whole structure of the model, one key point is the significance and reproducibility of the results. Using the normal distribution for the number of people and Hoeffding's inequality for the number of gestures per person, we estimated that the sample size of the test set must be on the order of hundreds of thousands to obtain a result with a confidence level of 95% and a precision of 5%. None of the works analyzed used a test set of this magnitude, and therefore the confidence and reproducibility of their results are questionable. Based on the definition of a true positive, only one of the HGR models that used machine-learning metrics was evaluated using recognition accuracy; the remaining models were evaluated using classification accuracy, as they only considered what gesture was performed by a person as a true positive.

5. Future Work

Based on this SLR, we identify possible future work in this field:
  • Research the optimal permitted delay to determine a general criterion of real-time processing in HGR models using EMG and ML.
  • Develop models using EMG and ML to recognize gestures of long and short duration. Therefore, these models must be able to recognize gestures using EMG data in the transient and steady states.
  • Develop evaluation methods for the HGR models using EMG and ML that state the test sets, metrics, and protocol of evaluation.
  • Develop general HGR models using EMG and ML that can be used by people who do not participate in the training of these models.
  • Develop recognition models that not only recognize one gesture but a sequence of movements.

Funding

This research was funded by Escuela Politécnica Nacional through the research project PIJ-16-13.

Acknowledgments

The authors gratefully acknowledge the financial support provided by Escuela Politécnica Nacional for the development of the research project PIJ-16-13. We also thank Marco Segura, Carlos Anchundia, Patricio Zambrano, and Jonathan Zea for comments that greatly improved this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CL: Classification
DA: Data Acquisition
DAT: Data Analysis Time
DCT: Data Collection Time
DG: Duration of the Gestures
EMG: Surface Electromyography
FE: Feature Extraction
FLT: Fitts' Law Test
HCI: Human–Computer Interaction
HGR: Hand Gesture Recognition
ID: Index of Difficulty
IMU: Inertial Measurement Unit
ML: Machine Learning
MMDC: Mathematical Model for a Dynamic Contraction
MMSC: Mathematical Model for a Static Contraction
MT: Movement Time
MU: Motor Unit
MUAP: Motor Unit Action Potential
NGpPE: Number of Gestures per Person in the Evaluation Set
NGpPT: Number of Gestures per Person in the Training Set
NPT: Number of People who Participated in the Training
POSTP: Postprocessing
PREP: Preprocessing
SEGM: Segmentation
SLR: Systematic Literature Review
SPS: Selected Primary Study
TAC: Target Achievement Control Test

References

  1. Shi, W.T.; Lyu, Z.J.; Tang, S.T.; Chia, T.L.; Yang, C.Y. A bionic hand controlled by hand gesture recognition based on surface EMG signals: A preliminary study. Biocybern. Biomed. Eng. 2018, 38, 126–135.
  2. Tavakoli, M.; Benussi, C.; Lourenco, J.L. Single channel surface EMG control of advanced prosthetic hands: A simple, low cost and efficient approach. Expert Syst. Appl. 2017, 79, 322–332.
  3. Wang, N.; Lao, K.; Zhang, X. Design and Myoelectric Control of an Anthropomorphic Prosthetic Hand. J. Bionic Eng. 2017, 14, 47–59.
  4. Islam, M.M.; Siddiqua, S.; Afnan, J. Real Time Hand Gesture Recognition Using Different Algorithms Based on American Sign Language. In Proceedings of the IEEE International Conference on Imaging, Vision & Pattern Recognition (icIVPR 2017), Dhaka, Bangladesh, 13–14 February 2017; pp. 1–6.
  5. Savur, C.; Sahin, F. Real-Time American Sign Language Recognition System Using Surface EMG Signal. In Proceedings of the 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 9–11 December 2015; pp. 497–502.
  6. Li, W.J.; Hsieh, C.Y.; Lin, L.F.; Chu, W.C. Hand Gesture Recognition for Post-Stroke Rehabilitation Using Leap Motion. In Proceedings of the 2017 International Conference on Applied System Innovation (ICASI), Sapporo, Japan, 13–17 May 2017; pp. 386–388.
  7. Nelson, A.; McCombe Waller, S.; Robucci, R.; Patel, C.; Banerjee, N. Evaluating touchless capacitive gesture recognition as an assistive device for upper extremity mobility impairment. J. Rehabil. Assist. Technol. Eng. 2018, 5, 1–13.
  8. Sarkar, A.; Patel, K.A.; Ram, R.G.; Capoor, G.K. Gesture Control of Drone Using a Motion Controller. In Proceedings of the 2016 International Conference on Industrial Informatics and Computer Systems (CIICS), Sharjah, United Arab Emirates, 13–15 March 2016; pp. 1–5.
  9. Estrada, L.A.; Benalcázar, M.E.; Sotomayor, N. Gesture Recognition and Machine Learning Applied to Sign Language Translation. In Proceedings of the VII Latin American Congress on Biomedical Engineering CLAIB 2016, Bucaramanga, Santander, Colombia, 26–28 October 2016; Springer: Berlin/Heidelberg, Germany, 2017; pp. 233–236.
  10. Pisharady, P.K.; Saerbeck, M. Recent methods and databases in vision-based hand gesture recognition: A review. Comput. Vis. Image Underst. 2015, 141, 152–165.
  11. Moschetti, A.; Fiorini, L.; Esposito, D.; Dario, P.; Cavallo, F. Recognition of daily gestures with wearable inertial rings and bracelets. Sensors 2016, 16, 1341.
  12. Zhang, X.; Chen, X.; Wang, W.H.; Yang, J.H.; Lantz, V.; Wang, K.Q. Hand Gesture Recognition and Virtual Game Control Based on 3D Accelerometer and EMG Sensors. In Proceedings of the 14th International Conference on Intelligent User Interfaces, Sanibel Island, FL, USA, 8–11 February 2009; ACM: New York, NY, USA, 2009; pp. 401–406.
  13. El-Sheimy, N.; Nassar, S.; Noureldin, A. Wavelet de-noising for IMU alignment. IEEE Aerosp. Electron. Syst. Mag. 2004, 19, 32–39.
  14. De Luca, C.J.; Gilmore, L.D.; Kuznetsov, M.; Roy, S.H. Filtering the surface EMG signal: Movement artifact and baseline noise contamination. J. Biomech. 2010, 43, 1573–1579.
  15. Weiss, L.D.; Weiss, J.M.; Silver, J.K. Easy EMG: A Guide to Performing Nerve Conduction Studies and Electromyography; Elsevier Health Sciences: Amsterdam, The Netherlands, 2015.
  16. Rodriguez-Falces, J.; Navallas, J.; Malanda, A. EMG Modeling. In Computational Intelligence in Electromyography Analysis: A Perspective on Current Applications and Future Challenges; InTechOpen: London, UK, 2012.
  17. McGill, K. Surface electromyogram signal modelling. Med. Biol. Eng. Comput. 2004, 42, 446–454.
  18. De Luca, C.J. A model for a motor unit train recorded during constant force isometric contractions. Biol. Cybern. 1975, 19, 159–167.
  19. Hogan, N.; Mann, R.W. Myoelectric Signal Processing: Optimal Estimation Applied to Electromyography, Part I: Derivation of the Optimal Myoprocessor. IEEE Trans. Biomed. Eng. 1980, 27, 382–395.
  20. Shwedyk, E.; Balasubramanian, R.; Scott, R. A nonstationary model for the electromyogram. IEEE Trans. Biomed. Eng. 1977, 24, 417–424.
  21. Sugiyama, M.; Kawanabe, M. Machine Learning in Non-Stationary Environments: Introduction to Covariate Shift Adaptation; MIT Press: Cambridge, MA, USA, 2012.
  22. Sugiyama, M.; Yamada, M.; du Plessis, M.C. Learning under nonstationarity: Covariate shift and class-balance change. Wiley Interdiscip. Rev. Comput. Stat. 2013, 5, 465–477.
  23. Merletti, R.; Conte, L.R.L. Surface EMG processing during isometric contractions. J. Electromyogr. Kinesiol. 1997, 7, 241–250.
  24. Farina, D.; Jiang, N.; Rehbaum, H.; Holobar, A.; Graimann, B.; Dietl, H.; Aszmann, O.C. The extraction of neural information from the surface EMG for the control of upper-limb prostheses: Emerging avenues and challenges. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 797–809.
  25. Oskoei, M.A.; Hu, H. Myoelectric control systems: A survey. Biomed. Signal Process. Control 2007, 2, 275–294.
  26. Parker, P.; Englehart, K.; Hudgins, B. Control of Powered Upper Limb Prostheses. In Electromyography: Physiology, Engineering, and Noninvasive Applications; Wiley: Hoboken, NJ, USA, 2004; pp. 453–475.
  27. Itou, T.; Terao, M.; Nagata, J.; Yoshida, M. Mouse Cursor Control System Using EMG. In Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Istanbul, Turkey, 25–28 October 2001; Volume 2, pp. 1368–1369.
  28. Aszmann, O.; Dietl, H.; Frey, M. Selective nerve transfers to improve the control of myoelectrical arm prostheses. Handchir. Mikrochir. Plast. Chir. 2008, 40, 60–65.
  29. Kuiken, T. Targeted reinnervation for improved prosthetic function. Phys. Med. Rehabil. Clin. 2006, 17, 1–13.
  30. Williams, T.; Meier, R.; Atkins, D. Control of powered upper extremity prostheses. Funct. Restor. Adults Child. Up. Extrem. Amputation 2004, 207, 224.
  31. Young, A.J.; Smith, L.H.; Rouse, E.J.; Hargrove, L.J. A comparison of the real-time controllability of pattern recognition to conventional myoelectric control for discrete and simultaneous movements. J. Neuroeng. Rehabil. 2014, 11, 5.
  32. Farina, D.; Holobar, A.; Merletti, R.; Enoka, R.M. Decoding the neural drive to muscles from the surface electromyogram. Clin. Neurophysiol. 2010, 121, 1616–1623.
  33. Dhillon, G.S.; Horch, K.W. Direct neural sensory feedback and control of a prosthetic arm. IEEE Trans. Neural Syst. Rehabil. Eng. 2005, 13, 468–472.
  34. Glaser, V.; Holobar, A.; Zazula, D. Real-time motor unit identification from high-density surface EMG. IEEE Trans. Neural Syst. Rehabil. Eng. 2013, 21, 949–958.
  35. Gazzoni, M.; Farina, D.; Merletti, R. A new method for the extraction and classification of single motor unit action potentials from surface EMG signals. J. Neurosci. Methods 2004, 136, 165–177.
  36. Miller, R.B. Response Time in Man-Computer Conversational Transactions. In Proceedings of the Fall Joint Computer Conference, New York, NY, USA, 9–11 December 1968; pp. 267–277.
  37. Card, S.K.; Moran, T.P.; Newell, A. The Psychology of Human-Computer Interaction; CRC Press: Boca Raton, FL, USA, 1983.
  38. Hudgins, B.; Parker, P.; Scott, R.N. A new strategy for multifunction myoelectric control. IEEE Trans. Biomed. Eng. 1993, 40, 82–94.
  39. Englehart, K.; Hudgins, B. A Robust, Real-Time Control Scheme for Multifunction Myoelectric Control. IEEE Trans. Biomed. Eng. 2003, 50, 848–854.
  40. Englehart, K.; Hudgins, B.; Parker, P.A. A wavelet-based continuous classification scheme for multifunction myoelectric control. IEEE Trans. Biomed. Eng. 2001, 48, 302–311.
  41. Graupe, D.; Kohn, K.H.; Kralj, A.; Basseas, S. Patient controlled electrical stimulation via EMG signature discrimination for providing certain paraplegics with primitive walking functions. J. Biomed. Eng. 1983, 5, 220–226.
  42. Farrell, T.R.; Weir, R.F. The Optimal Controller Delay for Myoelectric Prostheses. IEEE Trans. Neural Syst. Rehabil. Eng. 2007, 15, 111–118.
  43. Jiang, N.; Rehbaum, H.; Vujaklija, I.; Graimann, B.; Farina, D. Intuitive, online, simultaneous, and proportional myoelectric control over two degrees-of-freedom in upper limb amputees. IEEE Trans. Neural Syst. Rehabil. Eng. 2013, 22, 501–510.
  44. Ortiz-Catalan, M.; Rouhani, F.; Brånemark, R.; Håkansson, B. Offline Accuracy: A Potentially Misleading Metric in Myoelectric Pattern Recognition for Prosthetic Control. In Proceedings of the 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, 25–29 August 2015; pp. 1140–1143. [Google Scholar]
  45. Vujaklija, I.; Roche, A.D.; Hasenoehrl, T.; Sturma, A.; Amsuess, S.; Farina, D.; Aszmann, O.C. Translating research on myoelectric control into clinics—Are the performance assessment methods adequate? Front. Neurorobotics 2017, 11, 7. [Google Scholar] [CrossRef] [PubMed]
  46. Gusman, J.; Mastinu, E.; Ortiz-Catalán, M. Evaluation of computer-based target achievement tests for myoelectric control. IEEE J. Transl. Eng. Health Med. 2017, 5, 1–10. [Google Scholar] [CrossRef] [PubMed]
  47. Lock, B.; Englehart, K.; Hudgins, B. Real-Time Myoelectric Control in a Virtual Environment to Relate Usability vs. Accuracy. In Proceedings of the MyoElectric Controls Symposium, Fredericton, NB, Canada, 17–19 August 2005; pp. 122–127. [Google Scholar]
  48. Hargrove, L.; Losier, Y.; Lock, B.; Englehart, K.; Hudgins, B. A Real-Time Pattern Recognition Based Myoelectric Control Usability Study Implemented in a Virtual Environment. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 23–26 August 2007; pp. 4842–4845. [Google Scholar]
  49. Mathiowetz, V.; Volland, G.; Kashman, N.; Weber, K. Adult norms for the Box and Block Test of manual dexterity. Am. J. Occup. Ther. 1985, 39, 386–391. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Simon, A.M.; Hargrove, L.J.; Lock, B.A.; Kuiken, T.A. The target achievement control test: Evaluating real-time myoelectric pattern recognition control of a multifunctional upper-limb prosthesis. J. Rehabil. Res. Dev. 2011, 48, 619. [Google Scholar] [CrossRef]
  51. Fitts, P.M. The information capacity of the human motor system in controlling the amplitude of movement. J. Exp. Psychol. 1954, 47, 381. [Google Scholar] [CrossRef] [Green Version]
  52. Kitchenham, B. Procedures for Performing Systematic Reviews; Keele Universuty: Keele, UK, 2004; Volume 33, pp. 1–26. [Google Scholar]
  53. Kitchenham, B.; Brereton, O.P.; Budgen, D.; Turner, M.; Bailey, J.; Linkman, S. Systematic literature reviews in software engineering–a systematic literature review. Inf. Softw. Technol. 2009, 51, 7–15. [Google Scholar] [CrossRef]
  54. Wohlin, C. Guidelines for Snowballing in Systematic Literature Studies and a Replication in Software Engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering, London, UK, 13–14 May 2014; p. 38. [Google Scholar]
  55. Motoche, C.; Benalcázar, M.E. Real-Time Hand Gesture Recognition Based on Electromyographic Signals and Artificial Neural Networks. In Proceedings of the 27th International Conference on Artificial Neural Networks, Rhodes, Greece, 4–7 October 2018; Volume 8131, pp. 352–361. [Google Scholar] [CrossRef]
  56. Yang, J.; Pan, J.; Li, J. SEMG-Based Continuous Hand Gesture Recognition Using GMM-HMM and Threshold Model. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Parisian Macao, China, 5–8 December 2017; pp. 1509–1514. [Google Scholar] [CrossRef]
  57. Redrovan, D.V.; Kim, D. Hand Gestures Recognition Using Machine Learning for Control of Multiple Quadrotors. In Proceedings of the 2018 IEEE Sensors Applications Symposium (SAS), Seoul, Korea, 12–14 March 2018. [Google Scholar] [CrossRef]
  58. Yang, C.; Long, J.; Urbin, M.A.; Feng, Y.; Song, G.; Weng, J.; Li, Z. Real-Time Myocontrol of a Human-Computer Interface by Paretic Muscles after Stroke. IEEE Trans. Cogn. Dev. Syst. 2018, 10, 1126–1132. [Google Scholar] [CrossRef]
  59. Malešević, N.; Marković, D.; Kanitz, G.; Controzzi, M.; Cipriani, C.; Antfolk, C. Vector Autoregressive Hierarchical Hidden Markov Models (VARHHMM) for extracting finger movements using multichannel surface EMG signals. Complexity 2018, 2018, 9728264. [Google Scholar] [CrossRef] [Green Version]
  60. Kerber, F.; Puhl, M.; Krüger, A. User-Independent Real-Time Hand Gesture Recognition Based on Surface Electromyography. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services, Vienna, Austria, 4–7 September 2017; p. 36. [Google Scholar]
  61. Benalcázar, M.E.; Jaramillo, A.G.; Zea, A.; Páez, A.; Andaluz, V.H. Hand Gesture Recognition Using Machine Learning and the Myo Armband. In Proceedings of the 25th European Signal Processing Conference (EUSIPCO), Kos Island, Greece, 28 August–2 September 2017; pp. 1040–1044. [Google Scholar] [CrossRef] [Green Version]
  62. Benalcázar, M.E.; Motoche, C.; Zea, J.A.; Jaramillo, A.G.; Anchundia, C.E.; Zambrano, P.; Segura, M.; Palacios, F.B.; Pérez, M. Real-Time Hand Gesture Recognition Using the Myo Armband and Muscle Activity Detection. In Proceedings of the IEEE Second Ecuador Technical Chapters Meeting (ETCM), Salinas, Ecuador, 16–20 October 2017; pp. 1–6. [Google Scholar]
  63. Benatti, S.; Rovere, G.; Bösser, J.; Montagna, F.; Farella, E.; Glaser, H.; Schönle, P.; Burger, T.; Fateh, S.; Huang, Q.; et al. A sub-10mW Real-Time Implementation for EMG Hand Gesture Recognition Based on a Multi-Core Biomedical SoC. In Proceedings of the 7th IEEE International Workshop on Advances in Sensors and Interfaces (IWASI), Vieste, Italy, 15–16 June 2017; pp. 139–144. [Google Scholar] [CrossRef] [Green Version]
  64. Lian, K.Y.; Chiu, C.C.; Hong, Y.J.; Sung, W.T. Wearable Armband for Real Time Hand Gesture Recognition. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 2992–2995. [Google Scholar] [CrossRef]
  65. Donovan, I.M.; Puchin, J.; Okada, K.; Zhang, X. Simple Space-Domain Features for Low-Resolution sEMG Pattern Recognition. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, Korea, 11–15 July 2017; pp. 62–65. [Google Scholar] [CrossRef]
  66. Wu, Z.; Li, X. A Wireless Surface EMG Acquisition and Gesture Recognition System. In Proceedings of the 9th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Datong, China, 15–17 October 2016; pp. 1675–1679. [Google Scholar] [CrossRef]
  67. Liu, X.; Zhang, M.; Richardson, A.; Lucas, T.; Van Der Spiegel, J. The Virtual Trackpad: An Electromyography-based, Wireless, Real-time, Low-Power, Embedded Hand Gesture Recognition System using an Event-driven Artificial Neural Network. IEEE Trans. Circuits Syst. II Express Briefsiefs 2016, 64, 1257–11261. [Google Scholar] [CrossRef]
  68. Luh, G.C.; Ma, Y.H.; Yen, C.J.; Lin, H.A. Muscle-Gesture Robot Hand Control Based on sEMG Signals with Wavelet Transform Features and Neural Network Classifier. In Proceedings of the 2016 International Conference on Machine Learning and Cybernetics (ICMLC), Jeju, Korea, 10–13 July 2016; Volume 2, pp. 627–632. [Google Scholar] [CrossRef]
  69. Abreu, J.G.; Teixeira, J.M.; Figueiredo, L.S.; Teichrieb, V. Evaluating Sign Language Recognition Using the Myo Armband. In Proceedings of the XVIII Symposium on Virtual and Augmented Reality (SVR), Gramado, Brazil, 21–24 June 2016; pp. 64–70. [Google Scholar] [CrossRef]
  70. Boyali, A.; Hashimoto, N. Spectral Collaborative Representation based Classification for hand gestures recognition on electromyography signals. Biomed. Signal Process. Control 2016, 24, 11–18. [Google Scholar] [CrossRef] [Green Version]
  71. Allard, U.C.; Nougarou, F.; Fall, C.L.; Giguère, P.; Gosselin, C.; Laviolette, F.; Gosselin, B. A Convolutional Neural Network for Robotic Arm Guidance Using sEMG Based Frequency-Features. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 464–2470. [Google Scholar] [CrossRef]
  72. Huang, H.; Li, T.; Bruschini, C.; Enz, C.; Koch, V.M.; Justiz, J.; Antfolk, C. EMG Pattern Recognition Using Decomposition Techniques for Constructing Multiclass Classifiers. In Proceedings of the 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob), Singapore, 26–29 June 2016; pp. 1296–1301. [Google Scholar] [CrossRef]
  73. Shafivulla, M. SEMG Based Human Computer Interface for Physically Challenged Patients. In Proceedings of the 2016 International Conference on Advances in Human Machine Interaction (HMI), Doddaballapur, Bengaluru, Karnataka, India, 3–5 March 2016; pp. 1–4. [Google Scholar]
  74. Kakoty, N.M.; Hazarika, S.M.; Gan, J.Q. EMG Feature Set Selection Through Linear Relationship for Grasp Recognition. J. Med Biol. Eng. 2016, 36, 883–890. [Google Scholar] [CrossRef]
  75. Wang, J.; Ren, H.; Chen, W.; Zhang, P. A Portable Artificial Robotic Hand Controlled by EMG Signal Using ANN Classifier. In Proceedings of the 2015 IEEE International Conference on Information and Automation (ICMA), Beijing, China, 2–5 August 2015; pp. 2709–2714. [Google Scholar] [CrossRef]
  76. Mane, S.M.; Kambli, R.A.; Kazi, F.S.; Singh, N.M. Hand motion recognition from single channel surface EMG using wavelet & artificial neural network. Procedia Comput. Sci. 2015, 49, 58–65. [Google Scholar] [CrossRef] [Green Version]
  77. Benatti, S.; Casamassima, F.; Milosevic, B.; Farella, E.; Schönle, P.; Fateh, S.; Burger, T.; Huang, Q.; Benini, L. A Versatile Embedded Platform for EMG Acquisition and Gesture Recognition. IEEE Trans. Biomed. Circuits Syst. 2015, 9, 620–630. [Google Scholar] [CrossRef] [PubMed]
  78. Rossi, M.; Benatti, S.; Farella, E.; Benini, L. Hybrid EMG Classifier Based on HMM and SVM for Hand Gesture Recognition in Prosthetics. In Proceedings of the 2015 IEEE International Conference on Industrial Technology (ICIT), Seville, Spain, 17–19 March 2015; pp. 1700–1705. [Google Scholar] [CrossRef]
  79. Li, H.; Chen, X.; Li, P. Human-computer Interaction System Design Based on Surface EMG Signals. In Proceedings of the 2014 International Conference on Modelling, Identification & Control, Marina Bay Sands, Singapore, 10–12 December 2014; pp. 94–98. [Google Scholar]
  80. Benatti, S.; Farella, E. Towards EMG Control Interface for Smart Garments. In Proceedings of the 18th ACM International Symposium on Wearable Computers: Adjunct Program, Seattle, WA, USA, 13–17 September 2014; pp. 163–170. [Google Scholar]
  81. Villarejo, J.J.; Costa, R.M.; Bastos, T.; Frizera, A. Identification of Low Level sEMG Signals for Individual Finger Prosthesis. In Proceedings of the 5th ISSNIP-IEEE Biosignals and Biorobotics Conference (2014): Biosignals and Robotics for Better and Safer Living (BRC), Salvador, Brazil, 26–28 May 2014; pp. 1–6. [Google Scholar] [CrossRef]
  82. Al Omari, F.; Hui, J.; Mei, C.; Liu, G. Pattern Recognition of Eight Hand Motions Using Feature Extraction of Forearm EMG Signal. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2014, 84, 473–480. [Google Scholar] [CrossRef]
  83. Chen, X.; Wang, Z.J. Pattern recognition of number gestures based on a wireless surface EMG system. Biomed. Signal Process. Control 2013, 8, 184–192. [Google Scholar] [CrossRef]
  84. Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 760–771. [Google Scholar] [CrossRef] [Green Version]
  85. Chung, E.A.; Benalcázar, M.E. Real-Time Hand Gesture Recognition Model Using Deep Learning Techniques and EMG Signals. In Proceedings of the 27th European Signal Processing Conference (EUSIPCO), Coruña, Spain, 2–6 September 2019; pp. 1–5. [Google Scholar]
  86. Benalcázar, M.E.; Anchundia, C.E.; Zea, J.A.; Zambrano, P.; Jaramillo, A.G.; Segura, M. Real-Time Hand Gesture Recognition Based on Artificial Feed-Forward Neural Networks and EMG. In Proceedings of the 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; pp. 1492–1496. [Google Scholar]
  87. Wang, J.; Tang, L.; Bronlund, J.E. Pattern Recognition-Based Real Time Myoelectric System for Robotic Hand Control. In Proceedings of the 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), Hangzhou, China, 9–11 June 2019; pp. 1598–1605. [Google Scholar]
  88. Das, A.K.; Laxmi, V.; Kumar, S. Hand Gesture Recognition and Classification Technique in Real-Time. In Proceedings of the 2019 International Conference on Vision Towards Emerging Trends in Communication and Networking (ViTECoN), Tamil Nadu, India, 30–31 March 2019; pp. 1–5. [Google Scholar]
  89. Luo, X.Y.; Wu, X.Y.; Chen, L.; Hu, N.; Zhang, Y.; Zhao, Y.; Hu, L.T.; Yang, D.D.; Hou, W.S. Forearm Muscle Synergy Reducing Dimension of the Feature Matrix in Hand Gesture Recognition. In Proceedings of the 3rd International Conference on Advanced Robotics and Mechatronics (ICARM), Singapore, 18–20 July 2018; pp. 691–696. [Google Scholar]
  90. Raurale, S.; McAllister, J.; del Rincon, J.M. EMG Wrist-Hand Motion Recognition System for Real-Time Embedded Platform. In Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1523–1527. [Google Scholar]
  91. Zanghieri, M.; Benatti, S.; Burrello, A.; Kartsch, V.; Conti, F.; Benini, L. Robust Real-Time Embedded EMG Recognition Framework Using Temporal Convolutional Networks on a Multicore IoT Processor. IEEE Trans. Biomed. Circuits Syst. 2019. [Google Scholar] [CrossRef]
  92. Yang, Y.; Duan, F.; Ren, J.; Liu, Z.; Zhu, C.; Soo, Y.; Mun, K. A Multi-Gestures Recognition System Based on Less sEMG Sensors. In Proceedings of the 4th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), Osaka, Japan, 3–5 July 2019; pp. 105–110. [Google Scholar]
  93. Tam, S.; Boukadoum, M.; Campeau-Lecours, A.; Gosselin, B. A Fully Embedded Adaptive Real-Time Hand Gesture Classifier Leveraging HD-sEMG & Deep Learning. IEEE Trans. Biomed. Circuits Syst. 2019. [Google Scholar] [CrossRef]
  94. Yang, K.; Zhang, Z. Real-time Pattern Recognition for Hand Gesture Based on ANN and Surface EMG. In Proceedings of the 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 24–26 May 2019; pp. 799–802. [Google Scholar]
  95. Donovan, I.M.; Okada, K.; Zhang, X. Adjacent Features for High-Density EMG Pattern Recognition. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, Hawaii, 17–21 July 2018; pp. 5978–5981. [Google Scholar]
  96. Neacsu, A.A.; Cioroiu, G.; Radoi, A.; Burileanu, C. Automatic EMG-based Hand Gesture Recognition System using Time-Domain Descriptors and Fully-Connected Neural Networks. In Proceedings of the 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 1–3 July 2019; pp. 232–235. [Google Scholar]
  97. Schabron, B.; Alashqar, Z.; Fuhrman, N.; Jibbe, K.; Desai, J. Artificial Neural Network to Detect Human Hand Gestures for a Robotic Arm Control. In Proceedings of the 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 1662–1665. [Google Scholar]
  98. Pancholi, S.; Joshi, A.M. Electromyography-based hand gesture recognition system for upper limb amputees. IEEE Sens. Lett. 2019, 3, 1–4. [Google Scholar] [CrossRef]
  99. Tavakoli, M.; Benussi, C.; Lopes, P.A.; Osorio, L.B.; de Almeida, A.T. Robust hand gesture recognition with a double channel surface EMG wearable armband and SVM classifier. Biomed. Signal Process. Control 2018, 46, 121–130. [Google Scholar] [CrossRef]
  100. Peter, L.; Maryncak, F.; Proto, A.; Cerny, M. Fuzzy Classification of Hand’s Motion. IFAC-PapersOnLine 2018, 51, 354–359. [Google Scholar] [CrossRef]
  101. Simão, M.; Neto, P.; Gibaru, O. EMG-based online classification of gestures with recurrent neural networks. Pattern Recognit. Lett. 2019, 128, 45–51. [Google Scholar] [CrossRef]
  102. Hassan, H.F.; Abou-Loukh, S.J.; Ibraheem, I.K. Teleoperated robotic arm movement using electromyography signal with wearable Myo armband. arXiv 2018, arXiv:1810.09929. [Google Scholar] [CrossRef]
  103. Liang, S.; Wu, Y.; Chen, J.; Zhang, L.; Chen, P.; Chai, Z.; Cao, C. Identification of Gesture Based on Combination of Raw sEMG and sEMG Envelope Using Supervised Learning and Univariate Feature Selection. J. Bionic Eng. 2019, 16, 647–662. [Google Scholar] [CrossRef]
  104. Qi, J.; Jiang, G.; Li, G.; Sun, Y.; Tao, B. Surface EMG hand gesture recognition system based on PCA and GRNN. Neural Comput. Appl. 2019. [Google Scholar] [CrossRef]
  105. Mayor, J.J.V.; Costa, R.M.; Frizera Neto, A.; Bastos, T.F. Dexterous hand gestures recognition based on low-density sEMG signals for upper-limb forearm amputees. Res. Biomed. Eng. 2017, 33, 202–217. [Google Scholar] [CrossRef] [Green Version]
  106. Zhang, Z.; Yang, K.; Qian, J.; Zhang, L. Real-Time Surface EMG Pattern Recognition for Hand Gestures Based on an Artificial Neural Network. Sensors 2019, 19, 3170. [Google Scholar] [CrossRef] [Green Version]
  107. Kamavuako, E.N.; Scheme, E.J.; Englehart, K.B. On the usability of intramuscular EMG for prosthetic control: A Fitts’ Law approach. J. Electromyogr. Kinesiol. 2014, 24, 770–777. [Google Scholar] [CrossRef]
  108. Scheme, E.J.; Englehart, K.B. Validation of a selective ensemble-based classification scheme for myoelectric control using a three-dimensional Fitts’ law test. IEEE Trans. Neural Syst. Rehabil. Eng. 2012, 21, 616–623. [Google Scholar] [CrossRef]
  109. Ameri, A.; Kamavuako, E.N.; Scheme, E.J.; Englehart, K.B.; Parker, P.A. Support vector regression for improved real-time, simultaneous myoelectric control. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 1198–1209. [Google Scholar] [CrossRef]
  110. Ortiz-Catalan, M.; Håkansson, B.; Brånemark, R. Real-time and simultaneous control of artificial limbs based on pattern recognition algorithms. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 756–764. [Google Scholar] [CrossRef] [PubMed]
  111. Waris, A.; Mendez, I.; Englehart, K.; Jensen, W.; Kamavuako, E.N. On the robustness of real-time myoelectric control investigations: A multiday Fitts’ law approach. J. Neural Eng. 2019, 16, 026003. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  112. Ameri, A.; Akhaee, M.A.; Scheme, E.; Englehart, K. Regression convolutional neural network for improved simultaneous EMG control. J. Neural Eng. 2019, 16, 036015. [Google Scholar] [CrossRef] [PubMed]
  113. Wurth, S.M.; Hargrove, L.J. A real-time comparison between direct control, sequential pattern recognition control and simultaneous pattern recognition control using a Fitts’ law style assessment procedure. J. Neuroeng. Rehabil. 2014, 11, 91. [Google Scholar] [CrossRef] [Green Version]
  114. Clancy, E.; Morin, E.; Merletti, R. Sampling, noise-reduction and amplitude estimation issues in surface electromyography. J. Electromyogr. Kinesiol. 2002, 113, 1–16. [Google Scholar] [CrossRef]
  115. Li, G.; Li, Y.; Zhang, Z.; Geng, Y.; Zhou, R. Selection of Sampling Rate for EMG Pattern Recognition Based Prosthesis Control. In Proceedings of the 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, Buenos Aires, Argentina, 31 August–4 September 2010; Volume 2010, pp. 5058–5061. [Google Scholar] [CrossRef]
  116. Winter, D.A. Biomechanics and Motor Control of Human Movement; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  117. Li, X.; Zhou, P.; Aruin, A.S. Teager–Kaiser energy operation of surface EMG improves muscle activity onset detection. Ann. Biomed. Eng. 2007, 35, 1532–1538. [Google Scholar] [CrossRef]
  118. Toledo-Pérez, D.C.; Martínez-Prado, M.A.; Gómez-Loenzo, R.A.; Paredes-García, W.J.; Rodríguez-Reséndiz, J. A Study of Movement Classification of the Lower Limb Based on up to 4-EMG Channels. Electronics 2019, 8, 259. [Google Scholar] [CrossRef] [Green Version]
  119. Li, G.; Schultz, A.E.; Kuiken, T.A. Quantifying pattern recognition—Based myoelectric control of multifunctional transradial prostheses. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 185–192. [Google Scholar]
  120. Smith, L.H.; Hargrove, L.J.; Lock, B.A.; Kuiken, T.A. Determining the Optimal Window Length for Pattern Recognition-Based Myoelectric Control: Balancing the Competing Effects of Classification Error and Controller Delay. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 19, 186–192. [Google Scholar] [CrossRef] [Green Version]
  121. Chowdhury, R.; Reaz, M.; Ali, M.; Bakar, A.; Chellappan, K.; Chang, T. Surface electromyography signal processing and classification techniques. Sensors 2013, 13, 12431–12466. [Google Scholar] [CrossRef]
  122. Disselhorst-Klug, C.; Silny, J.; Rau, G. Improvement of spatial resolution in surface-EMG: A theoretical and experimental comparison of different spatial filters. IEEE Trans. Biomed. Eng. 1997, 44, 567–574. [Google Scholar] [CrossRef] [PubMed]
  123. Phinyomark, A.; N Khushaba, R.; Scheme, E. Feature extraction and selection for myoelectric control based on wearable EMG sensors. Sensors 2018, 18, 1615. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  124. Toledo-Pérez, D.C.; Rodríguez-Reséndiz, J.; Gómez-Loenzo, R.A.; Jauregui-Correa, J. Support Vector Machine-Based EMG Signal Classification Techniques: A Review. Appl. Sci. 2019, 9, 4402. [Google Scholar] [CrossRef] [Green Version]
  125. Nazmi, N.; Abdul Rahman, M.; Yamamoto, S.I.; Ahmad, S.; Zamzuri, H.; Mazlan, S. A review of classification techniques of EMG signals during isotonic and isometric contractions. Sensors 2016, 16, 1304. [Google Scholar] [CrossRef] [Green Version]
  126. Williams, M.R.; Kirsch, R.F. Evaluation of head orientation and neck muscle EMG signals as command inputs to a human–computer interface for individuals with high tetraplegia. IEEE Trans. Neural Syst. Rehabil. Eng. 2008, 16, 485–496. [Google Scholar] [CrossRef] [Green Version]
  127. Ortiz-Catalan, M.; Brånemark, R.; Håkansson, B. BioPatRec: A modular research platform for the control of artificial limbs based on pattern recognition algorithms. Source Code Biol. Med. 2013, 8, 11. [Google Scholar] [CrossRef] [Green Version]
  128. Kuiken, T.A.; Li, G.; Lock, B.A.; Lipschutz, R.D.; Miller, L.A.; Stubblefield, K.A.; Englehart, K.B. Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms. JAMA 2009, 301, 619–628. [Google Scholar] [CrossRef] [Green Version]
  129. Soukoreff, R.W.; MacKenzie, I.S. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts’ law research in HCI. Int. J. Hum.-Comput. Stud. 2004, 61, 751–789. [Google Scholar] [CrossRef]
  130. Phinyomark, A.; Scheme, E. EMG pattern recognition in the era of big data and deep learning. Big Data Cogn. Comput. 2018, 2, 21. [Google Scholar] [CrossRef] [Green Version]
  131. Sapsanis, C.; Georgoulas, G.; Tzes, A.; Lymberopoulos, D. Improving EMG Based Classification of Basic Hand Movements Using EMD. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5754–5757. [Google Scholar]
  132. Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Müller, H. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Sci. Data 2014, 1, 1–13. [Google Scholar] [CrossRef] [Green Version]
Figure 1. The resulting primary studies after each action carried out in the two stages: search of primary studies and analysis of primary studies.
Figure 2. Number of the SPS published per (a) year and per (b) type of publication.
Figure 3. The six stages of the standard structure of the SPS.
Figure 4. Segmentation of the EMG of a gesture using the three techniques: (a) gesture detection, (b) adjacent sliding windowing, and (c) overlapping sliding windowing.
Figure 5. The EMG data of a long-term peace gesture (most of the EMG data are in the steady state).
Figure 6. The EMG data of a short-term peace gesture (most of the EMG data are in the transient state).
Table 1. Search strings used to find primary studies.
ID | Search String
SS1 | “Electromyography” AND “Hand Gesture Recognition” AND “Real Time”
SS2 | “Electromyography” AND “Hand Gesture Recognition” AND “Real-Time”
SS3 | “Electromyography” AND “Hand Gesture Recognition” AND “Online”
SS4 | “Electromyography” AND “Hand Gesture Recognition” AND “On line”
SS5 | “Electromyography” AND “Hand Gesture Recognition” AND “On-line”
SS6 | “Electromyography” AND “Hand Gesture Recognition” AND “box and blocks”
SS7 | “Electromyography” AND “Hand Gesture Recognition” AND “target achievement control”
SS8 | “Electromyography” AND “Hand Gesture Recognition” AND “Fitts’ law”
SS9 | “EMG” AND “Hand Gesture Recognition” AND “Real Time”
SS10 | “EMG” AND “Hand Gesture Recognition” AND “Real-Time”
SS11 | “EMG” AND “Hand Gesture Recognition” AND “Online”
SS12 | “EMG” AND “Hand Gesture Recognition” AND “On line”
SS13 | “EMG” AND “Hand Gesture Recognition” AND “On-line”
SS14 | “EMG” AND “Hand Gesture Recognition” AND “box and blocks”
SS15 | “EMG” AND “Hand Gesture Recognition” AND “target achievement control”
SS16 | “EMG” AND “Hand Gesture Recognition” AND “Fitts’ law”
Table 2. Number of primary studies for each literature repository and search string.
Literature Repositories | SS1 | SS2 | SS3 | SS4 | SS5 | SS6 | SS7 | SS8 | SS9 | SS10 | SS11 | SS12 | SS13 | SS14 | SS15 | SS16
IEEE Xplore6666713350503614546531212
ACM Digital Library3434513133333113132530688122
Science Direct34342525254141303030113333
Springer5252297775752999332222
Table 3. The identifier, title, and reference of the 65 selected primary studies (SPS) used in this SLR.
ID SPS | Title | Type of Publication
SPS 1 | A Bionic Hand Controlled by Hand Gesture Recognition Based on Surface EMG Signals: A Preliminary Study [1] | Journal
SPS 2 | Real-Time Hand Gesture Recognition Based on Electromyographic Signals and Artificial Neural Networks [55] | Conference
SPS 3 | sEMG-Based Continuous Hand Gesture Recognition Using GMM-HMM and Threshold Model [56] | Conference
SPS 4 | Hand Gestures Recognition Using Machine Learning for Control of Multiple Quadrotors [57] | Symposium
SPS 5 | Real-Time Myocontrol of a Human–Computer Interface by Paretic Muscles After Stroke [58] | Journal
SPS 6 | Decoding of Individual Finger Movements From Surface EMG Signals Using Vector Autoregressive Hierarchical Hidden Markov Models (VARHHMM) [59] | Conference
SPS 7 | User-Independent Real-Time Hand Gesture Recognition Based on Surface Electromyography [60] | Conference
SPS 8 | Hand Gesture Recognition Using Machine Learning and the Myo Armband [61] | Conference
SPS 9 | Real-Time Hand Gesture Recognition Using the Myo Armband and Muscle Activity Detection [62] | Conference
SPS 10 | A Sub-10 mW Real-Time Implementation for EMG Hand Gesture Recognition Based on a Multi-Core Biomedical SoC [63] | Workshop
SPS 11 | Design and Myoelectric Control of an Anthropomorphic Prosthetic Hand [3] | Journal
SPS 12 | Wearable Armband for Real Time Hand Gesture Recognition [64] | Conference
SPS 13 | Simple Space-Domain Features for Low-Resolution sEMG Pattern Recognition [65] | Conference
SPS 14 | A Wireless Surface EMG Acquisition and Gesture Recognition System [66] | Congress
SPS 15 | Single Channel Surface EMG Control of Advanced Prosthetic Hands: A Simple, Low Cost and Efficient Approach [2] | Journal
SPS 16 | The Virtual Trackpad: an Electromyography-Based, Wireless, Real-Time, Low-Power, Embedded Hand Gesture Recognition System Using an Event-Driven Artificial Neural Network [67] | Journal
SPS 17 | Muscle-Gesture Robot Hand Control Based on sEMG Signals With Wavelet Transform Features and Neural Network classifier [68] | Conference
SPS 18 | Evaluating Sign Language Recognition Using the Myo Armband [69] | Symposium
SPS 19 | Spectral Collaborative Representation Based Classification for Hand Gestures Recognition on Electromyography Signals [70] | Conference
SPS 20 | A Convolutional Neural Network for Robotic Arm Guidance Using sEMG Based Frequency-Features [71] | Conference
SPS 21 | EMG Pattern Recognition Using Decomposition Techniques for Constructing Multiclass Classifier [72] | Conference
SPS 22 | SEMG Based Human Computer Interface for Physically Challenged Patients [73] | Conference
SPS 23 | EMG Feature Set Selection Through Linear Relationship for Grasp Recognition [74] | Journal
SPS 24 | A Portable Artificial Robotic Hand Controlled by EMG Signal Using ANN Classifier [75] | Conference
SPS 25 | Real-Time American Sign Language Recognition System by Using Surface EMG Signal [5] | Conference
SPS 26 | Hand Motion Recognition From Single Channel Surface EMG Using Wavelet & Artificial Neural Network [76] | Conference
SPS 27 | A Versatile Embedded Platform for EMG Acquisition and Gesture Recognition [77] | Journal
SPS 28 | Hybrid EMG classifier Based on HMM and SVM for Hand Gesture Recognition in Prosthetics [78] | Conference
SPS 29 | Human–Computer Interaction System Design Based on Surface EMG Signals [79] | Conference
SPS 30 | Towards EMG Control Interface for Smart Garments [80] | Symposium
SPS 31 | Identification of Low Level sEMG Signals for Individual Finger Prosthesis [81] | Conference
SPS 32 | Pattern Recognition of Eight Hand Motions Using Feature Extraction of Forearm EMG Signal [82] | Journal
SPS 33 | Pattern Recognition of Number Gestures Based on a Wireless Surface EMG System [83] | Journal
SPS 34 | Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning [84] | Journal
SPS 35 | Real-Time Hand Gesture Recognition Model Using Deep Learning Techniques and EMG Signals [85] | Conference
SPS 36 | Real-Time Hand Gesture Recognition Based on Artificial Feed-Forward Neural Networks and EMG [86] | Conference
SPS 37 | Pattern Recognition-Based Real Time Myoelectric System for Robotic Hand Control [87] | Conference
SPS 38 | Hand Gesture Recognition and Classification Technique in Real-Time [88] | Conference
SPS 39 | Forearm Muscle Synergy Reducing Dimension of the Feature Matrix in Hand Gesture Recognition [89] | Conference
SPS 40 | EMG Wrist-Hand Motion Recognition System for Real-Time Embedded Platform [90] | Conference
SPS 41 | Robust Real-Time Embedded EMG Recognition Framework Using Temporal Convolutional Networks on a Multicore IoT Processor [91] | Journal
SPS 42 | A Multi-Gestures Recognition System Based on Less sEMG Sensors [92] | Conference
SPS 43 | A Fully Embedded Adaptive Real-Time Hand Gesture Classifier Leveraging HD-sEMG & Deep Learning [93] | Journal
SPS 44 | Real-time Pattern Recognition for Hand Gesture Based on ANN and Surface EMG [94] | Conference
SPS 45 | Adjacent Features for High-Density EMG Pattern Recognition [95] | Conference
SPS 46 | Automatic EMG-based Hand Gesture Recognition System Using Time-Domain Descriptors and Fully-Connected Neural Networks [96] | Conference
SPS 47 | Artificial Neural Network to Detect Human Hand Gestures for a Robotic Arm Control [97] | Conference
SPS 48 | Electromyography-Based Hand Gesture Recognition System for Upper Limb Amputees [98] | Journal
SPS 49 | Robust Hand Gesture Recognition With a Double Channel Surface EMG Wearable Armband and SVM classifier [99] | Journal
SPS 50 | Fuzzy Classification of Hand’s Motion [100] | Conference
SPS 51 | EMG-Based Online Classification of Gestures With Recurrent Neural Networks [101] | Journal
SPS 52 | Teleoperated Robotic Arm Movement Using Electromyography Signal With Wearable Myo Armband [102] | Journal
SPS 53 | Identification of Gesture Based on Combination of Raw sEMG and sEMG Envelope Using Supervised Learning and Univariate Feature Selection [103] | Journal
SPS 54 | Surface EMG Hand Gesture Recognition System Based on PCA and GRNN [104] | Journal
SPS 55 | Dexterous Hand Gestures Recognition Based on Low-Density sEMG Signals for Upper-Limb Forearm amputees [105] | Journal
SPS 56 | Real-Time Surface EMG Pattern Recognition for Hand Gestures Based on an Artificial Neural Network [106] | Journal
SPS 57 | On the Usability of Intramuscular EMG for Prosthetic Control: A Fitts’ Law Approach [107] | Journal
SPS 58 | Validation of a Selective Ensemble-Based Classification Scheme for Myoelectric Control Using a Three-Dimensional Fitts’ Law Test [108] | Journal
SPS 59 | Support Vector Regression for Improved Real-Time, Simultaneous Myoelectric Control [109] | Journal
SPS 60 | Real-Time and Simultaneous Control of Artificial Limbs Based on Pattern Recognition Algorithms [110] | Journal
SPS 61 | On the Robustness of Real-Time Myoelectric Control Investigations: A Multiday Fitts’ Law approach [111] | Journal
SPS 62 | Regression Convolutional Neural Network for Improved Simultaneous EMG Control [112] | Journal
SPS 63 | A Comparison of the Real-Time Controllability of Pattern Recognition to Conventional Myoelectric Control for Discrete and Simultaneous Movements [31] | Journal
SPS 64 | A Real-Time Comparison Between Direct Control, Sequential Pattern Recognition Control and Simultaneous Pattern Recognition Control Using a Fitts’ Law Style Assessment Procedure [113] | Journal
SPS 65 | Evaluation of Computer-Based Target Achievement Tests for Myoelectric Control [46] | Journal
Table 4. Inclusion and exclusion criteria used in this systematic literature review (SLR).
Inclusion Criteria | Primary studies about the development of a Hand Gesture Recognition (HGR) model.
Inclusion Criteria | Primary studies that use electromyography (EMG) as input to the HGR model.
Exclusion Criteria | The full text of the primary study was not available.
Exclusion Criteria | Primary studies that do not use machine learning (ML) in the HGR model.
Exclusion Criteria | Primary studies that do not indicate that their models run in real time.
Exclusion Criteria | Primary studies written in a language other than English.
Exclusion Criteria | Primary studies that are not peer-reviewed.
Table 5. The data extracted from the 65 SPS and their targets.
Extracted Data | Target
Publication year | Study overview
Primary study type | Study overview
Structure of the HGR model | RQ1
Controller delay of the HGR model | RQ2
Hardware used | RQ2
Number of gestures recognized | RQ3
Types of gestures recognized | RQ3
Metrics and results used to evaluate the HGR models | RQ4
Table 6. Standard structure used by the 65 HGR models.
ID SPS | DA | SEGM | PREP | FE | CL | POSTP
SPS 1 | yes | yes | no | yes | yes | no
SPS 2 | yes | yes | yes | yes | yes | yes
SPS 3 | yes | yes | no | yes | yes | no
SPS 4 | yes | yes | no | no | yes | no
SPS 5 | yes | yes | no | yes | yes | no
SPS 6 | yes | yes | no | yes | yes | no
SPS 7 | yes | yes | no | yes | yes | no
SPS 8 | yes | yes | yes | no | yes | yes
SPS 9 | yes | yes | yes | no | yes | yes
SPS 10 | yes | yes | yes | yes | yes | no
SPS 11 | yes | yes | yes | yes | yes | yes
SPS 12 | yes | no | yes | yes | yes | no
SPS 13 | yes | yes | no | yes | yes | no
SPS 14 | yes | no | yes | yes | yes | no
SPS 15 | yes | yes | yes | yes | yes | no
SPS 16 | yes | yes | yes | yes | yes | no
SPS 17 | yes | no | yes | yes | yes | no
SPS 18 | yes | no | no | yes | yes | no
SPS 19 | yes | yes | no | yes | yes | no
SPS 20 | yes | yes | no | yes | yes | no
SPS 21 | yes | yes | yes | yes | yes | yes
SPS 22 | yes | no | yes | yes | yes | no
SPS 23 | yes | no | yes | yes | yes | no
SPS 24 | yes | no | no | yes | yes | no
SPS 25 | yes | yes | yes | yes | yes | no
SPS 26 | yes | yes | no | yes | yes | no
SPS 27 | yes | yes | yes | no | yes | no
SPS 28 | yes | no | yes | no | yes | no
SPS 29 | yes | no | yes | yes | yes | no
SPS 30 | yes | yes | yes | no | yes | no
SPS 31 | yes | yes | yes | yes | yes | no
SPS 32 | yes | yes | yes | yes | yes | no
SPS 33 | yes | yes | yes | yes | yes | no
SPS 34 | yes | yes | no | yes | yes | no
SPS 35 | yes | yes | yes | no | yes | yes
SPS 36 | yes | no | yes | yes | yes | no
SPS 37 | yes | yes | yes | yes | yes | no
SPS 38 | yes | no | yes | yes | yes | no
SPS 39 | yes | yes | yes | yes | yes | no
SPS 40 | yes | yes | no | yes | yes | no
SPS 41 | yes | yes | yes | no | yes | yes
SPS 42 | yes | yes | yes | yes | yes | yes
SPS 43 | yes | yes | yes | yes | yes | no
SPS 44 | yes | yes | no | yes | yes | no
SPS 45 | yes | yes | no | no | yes | no
SPS 46 | yes | yes | no | yes | yes | no
SPS 47 | yes | no | yes | no | yes | yes
SPS 48 | yes | yes | yes | yes | yes | yes
SPS 49 | yes | yes | no | yes | yes | no
SPS 50 | yes | yes | yes | yes | yes | no
SPS 51 | yes | yes | yes | yes | yes | no
SPS 52 | yes | no | no | yes | yes | yes
SPS 53 | yes | yes | no | yes | yes | no
SPS 54 | yes | yes | yes | yes | yes | no
SPS 55 | yes | no | no | yes | yes | no
SPS 56 | yes | yes | yes | yes | yes | no
SPS 57 | yes | yes | yes | yes | yes | no
SPS 58 | yes | yes | yes | no | yes | no
SPS 59 | yes | yes | yes | yes | yes | no
SPS 60 | yes | yes | no | yes | yes | yes
SPS 61 | yes | yes | no | yes | yes | no
SPS 62 | yes | yes | yes | no | yes | no
SPS 63 | yes | yes | yes | yes | yes | yes
SPS 64 | yes | yes | yes | yes | yes | yes
SPS 65 | yes | yes | yes | yes | yes | yes
yes: The model used this stage; no: The model did not use this stage; DA: Data Acquisition Stage; SEGM: Segmentation Stage; PREP: Preprocessing Stage; FE: Feature Extraction Stage; CL: Classification Stage; POSTP: Postprocessing Stage.
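Read as a whole, Table 6 describes a processing chain rather than isolated blocks. As a rough, hedged illustration of how the six stages fit together, the following Python sketch chains them for a single recognition decision; the function and class names, the window parameters, the toy nearest-centroid classifier, and the majority-vote postprocessing are our own illustrative choices and are not taken from any particular SPS.

```python
import numpy as np

class NearestCentroid:
    """Toy stand-in for the classification stage (CL)."""
    def __init__(self, centroids):  # centroids: dict mapping label -> feature vector
        self.centroids = centroids

    def predict(self, feats):
        return min(self.centroids, key=lambda k: np.linalg.norm(feats - self.centroids[k]))

def hgr_pipeline(raw_emg, classifier, window_len=50, stride=10):
    """Chain the six stages for one labeled decision over a recording."""
    labels = []
    for start in range(0, raw_emg.shape[0] - window_len + 1, stride):   # SEGM: overlapping windows
        window = raw_emg[start:start + window_len]
        window = np.abs(window - window.mean(axis=0))                   # PREP: offset compensation + rectification
        feats = np.concatenate([window.mean(axis=0),                    # FE: MAV per channel...
                                np.sqrt((window ** 2).mean(axis=0))])   # ...and RMS per channel
        labels.append(classifier.predict(feats))                        # CL: window-level prediction
    values, counts = np.unique(labels, return_counts=True)              # POSTP: majority vote
    return values[np.argmax(counts)]

emg = np.random.randn(200, 8)   # DA: one second of 8-channel EMG at 200 Hz (illustrative)
clf = NearestCentroid({"rest": np.zeros(16), "fist": np.ones(16)})
print(hgr_pipeline(emg, clf))
```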
Table 7. The number of sensors, sampling rate, acquisition devices, segmentation techniques, and preprocessing techniques used in the 65 HGR models.
ID SPS | Number of Sensors | Sampling Rate (Hz) | Acquisition Device Used | Segmentation Technique Used | Preprocessing Technique Used
SPS 1 | 2 | 1000 | MA300 | ASW | NI
SPS 2 | 8 | 200 | Myo armband | OSW | FL and RE
SPS 3 | 8 | 200 | Myo armband | OSW and GD | NI
SPS 4 | 8 | 200 | Myo armband | ASW | NI
SPS 5 | 8 | 1000 | Homemade device | OSW | NI
SPS 6 | 16 | 1600 | Homemade device | OSW | NI
SPS 7 | 8 | 200 | Myo armband | OSW and GD | NI
SPS 8 | 8 | 200 | Myo armband | OSW | FL and RE
SPS 9 | 8 | 200 | Myo armband | OSW and GD | FL and RE
SPS 10 | 3 | 1000 | Homemade device | ASW | FL and OC
SPS 11 | 2 | 1000 | Homemade device | ASW | PreS
SPS 12 | 3 | NI | Homemade device | NI | FL and RE
SPS 13 | 8 | 200 | Myo armband | OSW | NI
SPS 14 | 3 | NI | Homemade device | NI | FL
SPS 15 | 1 | NI | Homemade device | ASW and GD | RE
SPS 16 | 4 | 1600 | Homemade device | ASW and GD | RE
SPS 17 | 8 | 200 | Myo armband | NI | FL
SPS 18 | 8 | 200 | Myo armband | NI | NI
SPS 19 | 8 | 200 | Myo armband | OSW | NI
SPS 20 | 8 | 200 | Myo armband | OSW | NI
SPS 21 | 16 | 1600 | Homemade device | OSW | FL
SPS 22 | 1 | 125 | Homemade device | NI | FL
SPS 23 | 2 | NI | Homemade device | NI | FL
SPS 24 | 3 | NI | Homemade device | NI | NI
SPS 25 | 8 | 960 | Bio Radio 150 | ASW | FL
SPS 26 | 1 | 1000 | Homemade device | ASW | NI
SPS 27 | 8 | 1000 | Homemade device | GD | FL
SPS 28 | 4 | 500 | Homemade device | NI | FL
SPS 29 | 4 | NI | Homemade device | NI | FL
SPS 30 | 4 | 1000 | Homemade device | OSW and GD | FL, OC and RE
SPS 31 | 4 | 1000 | Homemade device | OSW | OC
SPS 32 | 4 | 1000 | Homemade device | ASW | FL
SPS 33 | 4 | 500 | Homemade device | ASW and GD | FL
SPS 34 | 8 | 200 | Myo armband | OSW | NI
SPS 35 | 8 | 200 | Myo armband | OSW and GD | RE
SPS 36 | 8 | 200 | Myo armband | OSW | FL and RE
SPS 37 | 2 | 1000 | Homemade device | OSW and GD | FL and AMPL
SPS 38 | 1 | 1000 | Homemade device | OSW | FL
SPS 39 | 6 | 1000 | ME6000 | NI | FL
SPS 40 | 8 | 200 | Myo armband | OSW | NI
SPS 41 | 8 | 4000 | Analog Front End (ADS1298) | OSW | NI
SPS 42 | 2 | NI | Telemyo 2400T G2 | ASW | NI
SPS 43 | 32 | 1000 | Homemade device | NI | FL, RE and TKEO
SPS 44 | 8 | 200 | Myo armband | ASW | FL and RE
SPS 45 | 128 | 2048 | EMG-USB2 | OSW | FL
SPS 46 | 8 | 200 | Myo armband | ASW | NI
SPS 47 | 8 | 200 | Myo armband | OSW | FL and NORM
SPS 48 | 8 | 1000 | Analog Front End (ADS1298) | OSW | FL
SPS 49 | 2 | 1000 | Homemade device | NI | FL and NORM
SPS 50 | 4 | NI | Homemade device | GD | FL and AMPL
SPS 51 | 16 | 200 | Myo armband | NI | NI
SPS 52 | 8 | 200 | Myo armband | OSW | NI
SPS 53 | 2 | 2000 | Homemade device | ASW and GD | FL and RE
SPS 54 | 16 | NI | Homemade device | NI | NI
SPS 55 | 4 | 1000 | Homemade device | OSW | NI
SPS 56 | 8 | 200 | Myo armband | OSW | FL and RE
SPS 57 | 4 | 1000 | Homemade device | OSW | FL
SPS 58 | 6 | 1000 | Homemade device | OSW | FL
SPS 59 | 8 | 1000 | Homemade device | OSW | FL
SPS 60 | 4 | 2000 | Homemade device | OSW | NI
SPS 61 | 8 | 200 | Myo armband | OSW | NI
SPS 62 | 8 | 1200 | Homemade device | OSW | FL
SPS 63 | 8-12 | 1000 | Homemade device | OSW | FL
SPS 64 | 6 | 1000 | Homemade device | OSW | FL
SPS 65 | 4 | 200 | Homemade device | OSW | FL
NI: Not indicated; OSW: Overlapping Sliding Windowing; ASW: Adjacent Sliding Windowing; GD: Gesture Detection; FL: Filtering; RE: Rectification; OC: Offset Compensation; PreS: Pre-smoothing; AMPL: Amplification; TKEO: Teager-Kaiser-Energy Operator; NORM: Normalization.
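The two sliding-windowing techniques in Table 7 differ only in the stride between consecutive windows. A minimal Python sketch of this difference is given below; the function name and the Myo-like parameters (8 channels at 200 Hz, 50-sample windows) are illustrative assumptions, not values prescribed by any SPS.

```python
import numpy as np

def sliding_windows(emg, window_len, stride):
    """Segment a recording (samples x channels) into windows of `window_len`
    samples taken every `stride` samples:
    stride == window_len -> adjacent sliding windowing (ASW);
    stride <  window_len -> overlapping sliding windowing (OSW)."""
    starts = range(0, emg.shape[0] - window_len + 1, stride)
    return np.stack([emg[s:s + window_len] for s in starts])

fs = 200                              # assumed Myo-like sampling rate (Hz)
emg = np.random.randn(5 * fs, 8)      # 5 s of 8-channel EMG
asw = sliding_windows(emg, 50, 50)    # ASW: no overlap between windows
osw = sliding_windows(emg, 50, 10)    # OSW: 40 samples of overlap
print(asw.shape, osw.shape)           # (20, 50, 8) (96, 50, 8)
```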
Table 8. Features according to the domain.
Time-Domain Features | Mean absolute value (MAV), root mean square (RMS), waveform length (WL), zero crossings (ZC), fourth-order autoregressive coefficients (AR-Coeff), standard deviation (SD), variance (VAR), slope sign changes (SSC), mean, median, integrated EMG (iEMG), sample entropy (SampEn), mean absolute value ratio (MAVR), modified mean absolute value (MMAV), simple square integral (SSI), Log detector (LOG), average amplitude change (AAC), maximum fractal length (MFL), minimum (MIN), maximum (MAX), Hjorth parameters (HJP), peak value (PK), energy ratio (ER), histogram (HISTG), Willison amplitude (WAMP), kurtosis (KURT), skewness (SKEW), non-negative matrix factorization (NMF), natural logarithm of the variance (ln-VAR), root sum square (RSS), logarithm of the root mean square (log-RMS), logarithm of the integrated EMG (log-iEMG), logarithm of the variance (log-VAR), logarithmic band power (LBP), first derivation (DIFF), detrended fluctuation analysis (DFA), modified mean absolute values (MAV1-MAV2), V-order, difference absolute standard deviation value (DASDV), max-min, autoregressive model intercept (Inpt), cardinality (CARD)
Frequency-Domain Features | Amplitude spectrum (AmpSpec), mean frequency (MNF), median frequency (MDF), modified median frequency (MMDF), modified mean frequency (MMNF), mean power (MNP), cepstral coefficients (Cep-Coeff), circulant matrix structure for eigenvalue decomposition (CMSED), fast Fourier transform (FFT), median amplitude spectrum (MAS), peak frequency (PKF), total power (TTP), power spectrum ratio (PSR)
Time-Frequency-Domain Features | Discrete wavelet transform (DWT), continuous wavelet transform (CWT), mean of the absolute wavelet coefficients (MOAC), average power of the wavelet coefficients (APOC), standard deviation of the wavelet coefficients (STDOC), MOAC-ratio
Space-Domain Features | Scaled mean absolute value (SMAV), mean absolute difference of the normalized values (MADN)
Fractal-Domain Features | De-trended fluctuation analysis (DFA), Higuchi fractal dimension (HFD)
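Most of the time-domain features in Table 8 are simple functions of a single EMG window. As a hedged example, the following Python sketch computes five of the most commonly used ones (MAV, RMS, WL, ZC, and SSC); the thresholding used for ZC and SSC is one common variant among several found in the literature, and the function name is our own.

```python
import numpy as np

def td_features(x, eps=0.0):
    """MAV, RMS, WL, ZC, and SSC for one single-channel EMG window.
    `eps` is an optional noise threshold for the crossing counts."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))          # root mean square
    wl = np.sum(np.abs(dx))                 # waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) &      # sign change between samples...
                (np.abs(dx) >= eps))        # ...above the noise threshold
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &   # sign change of the slope
                 (np.maximum(np.abs(dx[:-1]), np.abs(dx[1:])) >= eps))
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": int(zc), "SSC": int(ssc)}

window = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.05 * np.random.randn(200)
print(td_features(window))
```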
Table 9. Features used in the 65 HGR models.
ID SPS | Features Used
SPS 1 | MAV
SPS 2 | MAV, RMS, WL, SSC, and HJP
SPS 3 | RMS
SPS 4 | NI
SPS 5 | MAV, WL, ZC, and SSC
SPS 6 | MAV
SPS 7 | MAV, RMS, ZC, VAR, ER, HISTG, WAMP, AmpSpec, MMDF, and MMNF
SPS 8 | NI
SPS 9 | NI
SPS 10 | RMS
SPS 11 | MAV, AR-Coeff, VAR, and SampEn
SPS 12 | WL, VAR, iEMG, and PK
SPS 13 | SMAV, and MADN
SPS 14 | AR-Coeff, and Mean
SPS 15 | Mean
SPS 16 | MAV
SPS 17 | MAV, SD, and DWT
SPS 18 | MAV
SPS 19 | CMSED
SPS 20 | FFT
SPS 21 | MAV, WL, ZC, and SSC
SPS 22 | RMS, SD, and SampEn
SPS 23 | DWT
SPS 24 | iEMG
SPS 25 | MAV, RMS, SD, MMAV, SSI, LOG, AAC, MFL, MIN, and MAX
SPS 26 | DWT
SPS 27 | NI
SPS 28 | NI
SPS 29 | AR-Coeff
SPS 30 | NI
SPS 31 | MAV, RMS, MNP, and DFA
SPS 32 | DWT
SPS 33 | MAV, WL, ZC, and MAVR
SPS 34 | CWT
SPS 35 | NI
SPS 36 | NI
SPS 37 | RMS, WL, WAMP, SampEn, and Cep-Coeff
SPS 38 | Mean, VAR, KURT, and SKEW
SPS 39 | NMF
SPS 40 | iEMG, ln-VAR, and RSS
SPS 41 | NI
SPS 42 | log-RMS, log-iEMG, log-VAR
SPS 43 | NI
SPS 44 | MAV, RMS, SSC, WL, and HJP
SPS 45 | SMAV, and MADN
SPS 46 | MAV, ZC, SSC, SKEW, RMS, HJP, and iEMG
SPS 47 | RMS, and Median
SPS 48 | RMS, WL, ZC, and SSC
SPS 49 | Mean
SPS 50 | RMS, LBP, and DIFF
SPS 51 | SD
SPS 52 | MAV, WL, RMS, AR-Coeff, ZC, and SSC
SPS 53 | MAV, MAV1-MAV2, VAR, RMS, SSI, V-order, iEMG, DASDV, AAC, ZC, LOG, SSC, WL, WAMP, MFL, MAX, MIN, max-min, SKEW, KURT, TTP, MNF, MDF, MNP, PKF, MOAC, APOC, STDOC, MOAC-ratio, Inpt, AR-Coeff
SPS 54 | RMS, WL, MAS and SampEn
SPS 55 | MAV, MAV1, MAV2, VAR, RMS, WL, ZC, SSC, AR-Coeff, MNF, MDF, PKF, MNP, TTP, PSR, DFA, and HFD
SPS 56 | MAV, SSC, WL, RMS, and HJP
SPS 57 | MAV, WL, ZC, and SSC
SPS 58 | NI
SPS 59 | MAV, WL, ZC, and SSC
SPS 60 | MAV, WL, ZC, and SSC
SPS 61 | MAV, WL, ZC, SSC, WAMP, and CARD
SPS 62 | NI
SPS 63 | MAV, WL, ZC, and SSC
SPS 64 | MAV, WL, ZC, and SSC
SPS 65 | MAV, WL, ZC, and SSC
Table 10. Time of data collection and data analysis, and hardware used in the 65 HGR models.
ID SPS | DCT (ms) | DAT (ms) | Hardware Used
SPS 1 | 250 | NI | NI
SPS 2 | 1000 | 29.38 | Personal computer
SPS 3 | 100 | 37.9 | Personal computer
SPS 4 | 250 | NI | NI
SPS 5 | 250 | NI | NI
SPS 6 | 250 | NI | NI
SPS 7 | 300 | 500 | NI
SPS 8 | 1000 | 250 | Personal computer
SPS 9 | 1000 | 193.1 | Personal computer
SPS 10 | 72 | 41 | Embedded system
SPS 11 | 250 | 70 | NI
SPS 12 | NI | NI | NI
SPS 13 | 200 | NI | NI
SPS 14 | NI | NI | NI
SPS 15 | NI | 10 | NI
SPS 16 | 250 | 0.2 | Embedded system
SPS 17 | NI | NI | Personal computer
SPS 18 | NI | NI | NI
SPS 19 | 500 | NI | NI
SPS 20 | 285 | 15 | Personal computer
SPS 21 | 250 | 7.57 | Personal computer
SPS 22 | NI | NI | NI
SPS 23 | 250 | NI | NI
SPS 24 | NI | NI | Embedded system
SPS 25 | 2000 | NI | NI
SPS 26 | NI | NI | NI
SPS 27 | NI | NI | Embedded system
SPS 28 | NI | NI | Embedded system
SPS 29 | NI | NI | NI
SPS 30 | NI | 2.5 | Personal computer
SPS 31 | 250 | NI | Personal computer
SPS 32 | 256 | NI | NI
SPS 33 | 64 | NI | Personal computer
SPS 34 | 260 | NI | NI
SPS 35 | 2000 | 3 | Personal computer
SPS 36 | 2500 | 11 | Personal computer
SPS 37 | 200 | NI | Personal computer
SPS 38 | 100 | NI | Personal computer
SPS 39 | 256 | 152.71 | NI
SPS 40 | 250 | 4.5 | Embedded system
SPS 41 | 150 | 12.8 | Embedded system
SPS 42 | 200 | 46.4 | Personal computer
SPS 43 | 200 | 5 | Embedded system
SPS 44 | NI | 233.4 | NI
SPS 45 | 200 | NI | NI
SPS 46 | 250 | NI | NI
SPS 47 | NI | NI | NI
SPS 48 | 200 | NI | Embedded system
SPS 49 | 800 | NI | NI
SPS 50 | 400 | NI | Embedded system
SPS 51 | 500 | NI | Personal computer
SPS 52 | 240 | NI | Personal computer
SPS 53 | 32 | NI | Personal computer
SPS 54 | NI | 190 | NI
SPS 55 | 300 | NI | NI
SPS 56 | 400 | 227.76 | NI
SPS 57 | 160 | <16 | NI
SPS 58 | 160 | <16 | NI
SPS 59 | 200 | 2 | NI
SPS 60 | 200 | 50 | Personal computer
SPS 61 | 200 | <50 | NI
SPS 62 | 167 | 6 | Personal computer
SPS 63 | 250 | <50 | NI
SPS 64 | 250 | <50 | NI
SPS 65 | 200 | <50 | Personal computer
NI: Not indicated.
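A minimal sketch of how the two delays in Table 10 combine is given below: assuming the controller delay is approximated by DCT + DAT and that the acceptable bound is about 300 ms (following [36,42]), a simple check flags models that exceed the budget. The function name and the exact budget value are our own illustrative choices, and the additive delay model is a simplification (with overlapping windows the effective delay can differ).

```python
def controller_delay(dct_ms, dat_ms, budget_ms=300.0):
    """Sum data collection time (DCT) and data analysis time (DAT),
    and compare against an assumed acceptable controller delay."""
    total = dct_ms + dat_ms
    return total, total <= budget_ms

# Values reported in Table 10 for SPS 41 and SPS 2, respectively
print(controller_delay(150, 12.8))     # (162.8, True): within the ~300 ms budget
print(controller_delay(1000, 29.38))   # (1029.38, False): exceeds it
```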
Table 11. The number of gestures recognized (i.e., classes), number of gestures per person in the training set (NGpPT), the number of people who participated in the training (NPT), the number of gestures per person in the evaluation set (NGpPE), the type of gestures recognized, the state of the EMG data used, and the duration of the gestures (DG).
ID SPS | Classes | NGpPT | NPT | NGpPE | TGR | StEMG | DG (s)
SPS 1 | 4 | 20 | 13 | 20 | Static | NI | 5
SPS 2 | 5 | 25 | 10 | 150 | Static | NI | STG
SPS 3 | 6 | 300 | 1 | 300 | Static | Steady and Transient | 4
SPS 4 | 8 * | NI | NI | NI | Static | NI | NI
SPS 5 | 9 * | 90 | 5 | NI | Static | NI | NI
SPS 6 | 13 * | 65 | 8 | 65 | Static | NI | 4
SPS 7 | 5 | 25 | 14 | 25 | Static | NI | NI
SPS 8 | 5 | 25 | 10 | 150 | Static | NI | STG
SPS 9 | 5 | 25 | 10 | 150 | Static | NI | STG
SPS 10 | 3 * | 15 | 1 | NI | Static | NI | NI
SPS 11 | 8 * | 80 | 6 | NI | Static | NI | 10
SPS 12 | 10 | NI | NI | NI | Static | NI | NI
SPS 13 | 9 | NI | 17 | NI | Static | NI | NI
SPS 14 | 4 | NI | NI | NI | Static | NI | 1
SPS 15 | 3 | 18 | 3 | 150 | Static | NI | NI
SPS 16 | 10 | 300 | 4 | NI | Static | Transient | STG
SPS 17 | 17 * | 13600 | 5 | 1700 | Static | NI | NI
SPS 18 | 20 | NI | NI | NI | Static | NI | 30
SPS 19 | 6 | NI | NI | NI | Static | NI | STG
SPS 20 | 7 * | 21 | NI | NI | Static | NI | 1
SPS 21 | 13 * | 65 | 8 | 65 | Static | NI | 4-6
SPS 22 | 4 | NI | 20 | NI | Static | NI | NI
SPS 23 | 6 | NI | 80 | NI | Static | NI | NI
SPS 24 | 6 | 300 | 1 | 300 | Static | NI | NI
SPS 25 | 26 | 1040 | 1 | 520 | Static and Dynamic | NI | 2
SPS 26 | 3 | NI | 4 | NI | Static | NI | NI
SPS 27 | 7 * | NI | 4 | NI | Static | NI | NI
SPS 28 | 6 * | 18 | 9 | 42 | Static | Steady | 3
SPS 29 | 4 | NI | NI | NI | Static | NI | NI
SPS 30 | 3 * | NI | 1 | NI | Static | NI | NI
SPS 31 | 6 * | 54 | 5 | 36 | Static | NI | 5-6
SPS 32 | 8 | NI | 10 | NI | Static | NI | 5
SPS 33 | 10 | 500 | 6 | 1800 | Static | NI | STG
SPS 34 | 7 * | 28 | 19 | 84 | Static | NI | 0.95
SPS 35 | 5 | 250 | 50 | 250 | Static | NI | STG
SPS 36 | 6 * | 30 | 10 | 150 | Static | NI | STG
SPS 37 | 5 * | 20 | 6 | 10 | Static | NI | 5
SPS 38 | 2 * | NI | 5 | NI | Static | NI | NI
SPS 39 | 5 | 40 | 5 | 160 | Static | NI | 4
SPS 40 | 9 * | 90 | 10 | 90 | Static | Steady | 5
SPS 41 | 9 * | 540 | 3 | 540 | Static | Steady | 3
SPS 42 | 6 | NI | 8 | NI | Static | NI | 5
SPS 43 | 8 | NI | NI | NI | Static | Steady and Transient | 5
SPS 44 | 6 * | 180 | 1 | 150 | Static | NI | NI
SPS 45 | 4 | 794 | 5 | 47 | NI | Steady | 6
SPS 46 | 7 * | NI | 17 | NI | Static | NI | 20
SPS 47 | 9 | NI | 1 | NI | Static | NI | 10
SPS 48 | 6 * | NI | 4 | 150 | Static | Transient | STG
SPS 49 | 4 | 40 | 7 | 100 | Static | NI | NI
SPS 50 | 5 | 510 | NI | NI | Static | NI | NI
SPS 51 | 8 * | 528 | 1 | 176 | Static | NI | 2
SPS 52 | 7 * | 56 | 6 | 48 | Static | NI | 5
SPS 53 | 9 | 450 | 20 | 450 | Static | NI | 1
SPS 54 | 9 * | 250 | NI | 60 | Static | NI | 5
SPS 55 | 13 | NI | 10 | NI | Static | Steady | 6
SPS 56 | 5 | 25 | 12 | 150 | Static | NI | 2 (training), and 5 (testing)
SPS 57 | 5 * | 10 | 9 | 48 | Static | NI | 3
SPS 58 | 7 * | 28 | 10 | 144 | Static | NI | 2
SPS 59 | 14 | 56 | 10 | 84 | Static | NI | 7
SPS 60 | 11 * | 33 | 10 | 6 | Static | NI | 3
SPS 61 | 5 * | 75 | 10 | 72 | Static | Steady | 4
SPS 62 | 9 * | 32 | 10 | 48 | Static | NI | 12
SPS 63 | 8 | 32 | 4 | 40 | Static | NI | 3
SPS 64 | 5 * | 40 | 11 | 270 | Static | NI | 3
SPS 65 | 7 * | 21 | 11 | 48 | Static | Steady and Transient | 3
NI: Not indicated; *: Including the rest gesture; NGpPT: Number of Gestures per Person in the Training set; NPT: Number of People Who Participated in the Training; NGpPE: Number of Gestures per Person in the Evaluation set; TGR: Type of Gestures Recognized; StEMG: State of the EMG; DG: Duration of the Gestures; STG: Short-Term Gesture.
Table 12. The evaluation metrics for machine learning used by the 56 HGR models.
Evaluation Metric | IDs of the SPS
Accuracy | All HGR models, except SPS 18, SPS 37, and SPS 38
Recall | SPS 2, SPS 3, SPS 4, SPS 8, SPS 9, SPS 12, SPS 14, SPS 17, SPS 18, SPS 19, SPS 24, SPS 26, SPS 28, SPS 29, SPS 31, SPS 33, SPS 35, SPS 36, SPS 39, SPS 40, SPS 42, SPS 44, SPS 46, SPS 49, SPS 53, SPS 55, and SPS 56
Precision | SPS 2, SPS 8, SPS 9, SPS 14, SPS 35, SPS 36, SPS 44, SPS 53, and SPS 56
Accuracy per User | SPS 1, SPS 5, SPS 6, SPS 16, SPS 26, SPS 31, SPS 33, SPS 38, SPS 39, SPS 48, SPS 52, SPS 53, and SPS 56
Recall per User | SPS 15, and SPS 26
Precision per User | SPS 15, and SPS 39
Median of the Accuracy per User | SPS 6
Standard Deviation of the Accuracy per User | SPS 1, SPS 5, SPS 7, SPS 20, SPS 35
Standard Deviation of the Accuracy per Class | SPS 17
Standard Deviation of each User Accuracy | SPS 5
Standard Deviation of the Recalls of each Class | SPS 17
Kappa Index | SPS 46
Accuracy Error | SPS 37
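For reference, the metrics in Table 12 can all be derived from the confusion matrix of a classifier. The following Python sketch (the helper name is our own) computes the overall accuracy, per-class recall and precision, and the kappa index from arrays of true and predicted labels.

```python
import numpy as np

def classification_metrics(y_true, y_pred, n_classes):
    """Accuracy, per-class recall/precision, and Cohen's kappa."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                       # rows: true, columns: predicted
    n = cm.sum()
    acc = np.trace(cm) / n
    recall = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)    # guard against empty classes
    precision = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)
    expected = (cm.sum(axis=1) @ cm.sum(axis=0)) / n ** 2   # chance agreement
    kappa = (acc - expected) / (1 - expected)
    return acc, recall, precision, kappa

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 1])
acc, rec, prec, kappa = classification_metrics(y_true, y_pred, 3)
print(f"accuracy={acc:.3f}, kappa={kappa:.3f}")
```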
Table 13. The accuracy, number of people who participated in the evaluation, type of data set (i.e., balanced or unbalanced), and the use of cross-validation by the 56 HGR models.
ID SPS | Model Classification Accuracy (%) | NPE | Type of Data Set | Cross-Validation
SPS 1 | 94.00 | 13 | balanced | NI
SPS 2 | 90.70 | 10 | balanced | NI
SPS 3 | 99.00 | 1 | balanced | yes
SPS 4 | 93.00 | 10 | balanced | NI
SPS 5 | 92.20 | 5 | balanced | yes
SPS 6 | 82.39 | 8 | unbalanced | NI
SPS 7 | 95.64 | 14 | balanced | yes
SPS 8 | 86.00 | 10 | balanced | NI
SPS 9 | 89.50 | 10 | balanced | NI
SPS 10 | 85.00 | 1 | NI | NI
SPS 11 | 97.35 | 6 | NI | NI
SPS 12 | 89.00 | NI | balanced | NI
SPS 13 | 82.43 | 17 | NI | NI
SPS 14 | 87.00 | NI | NI | NI
SPS 15 | 90.00 | 3 | balanced | NI
SPS 16 | 94.00 | 4 | balanced | yes
SPS 17 | 89.38 | 5 | balanced | NI
SPS 18 | NI | NI | NI | yes
SPS 19 | 97.30 | NI | NI | NI
SPS 20 | 97.90 | 18 | NI | NI
SPS 21 | 89.00 | 8 | balanced | yes
SPS 22 | 97.50 | 20 | NI | NI
SPS 23 | 97.50 | 80 | balanced | yes
SPS 24 | 71.00 | 1 | balanced | NI
SPS 25 | 82.30 | 1 | balanced | yes
SPS 26 | 93.25 | 4 | balanced | NI
SPS 27 | 89.20 | 4 | NI | NI
SPS 28 | 91.80 | 9 | balanced | yes
SPS 29 | 93.00 | 10 | balanced | NI
SPS 30 | 83.90 | 1 | NI | yes
SPS 31 | 88.00 | 5 | balanced | yes
SPS 32 | 95.00 | 10 | balanced | yes
SPS 33 | 90.00 | 6 | balanced | yes
SPS 34 | 98.31 | 17 | balanced | yes
SPS 35 | 85.08 | 60 | balanced | NI
SPS 36 | 90.1 | 10 | balanced | NI
SPS 37 | NI | 6 | balanced | NI
SPS 38 | NI | 5 | NI | NI
SPS 39 | 96.08 | 5 | balanced | yes
SPS 40 | 99.03 | 10 | balanced | NI
SPS 41 | 97.01 | 3 | balanced | yes
SPS 42 | 91.93 | 8 | balanced | NI
SPS 43 | 98.15 | NI | balanced | NI
SPS 44 | 96.70 | 1 | balanced | NI
SPS 45 | 82.11 | 5 | NI | yes
SPS 46 | 99.78 | 17 | balanced | NI
SPS 47 | 90.30 | 1 | balanced | yes
SPS 48 | 94.14 | 4 | balanced | NI
SPS 49 | 90.00 | 7 | balanced | NI
SPS 50 | 73.00 | NI | NI | NI
SPS 51 | 95.31 * | 1 | balanced | NI
SPS 52 | 95.20 | 6 | balanced | yes
SPS 53 | 95.00 | 20 | balanced | NI
SPS 54 | 95.10 | NI | NI | NI
SPS 55 | 99.20 | 10 | NI | NI
SPS 56 | 98.70 | 12 | balanced | NI
NI: Not indicated; NPE: Number of people who participated in the Evaluation; *: This is recognition accuracy (i.e., this model determines what gesture and when this gesture was performed by a person); yes: This model uses cross-validation.
Table 14. Metrics of the target achievement test used by the nine HGR models.
Metric | Description
Throughput | Ratio between the index of difficulty and the movement time (in seconds) [107].
Path Efficiency | Ratio between the straight-line distance and the actual distance traveled [107,126].
Overshoot | Ratio between the number of overshoots and the number of targets; measures the ability to stop on a target [107,126].
Average Speed | Average nonzero speed of the cursor over the course of the trial [107,126].
Completion Rate | Ratio between the completed trials and the number of trials within the allowed time (i.e., trial time) [50,126].
Stopping Distance | Total distance traveled (path length) during the dwell time [108].
Completion Time | Time from movement initiation to the completion of the trial [31].
Real-time Accuracy | Ratio between correct predictions and the number of predictions made during the completion time [127].
Length Error | Ratio between the distance traveled beyond the total required distance and the total required distance [31].
Reaction Time | Time from target appearance to the first movement of the cursor/virtual prosthesis [113].
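As a worked illustration of the Fitts’ law metrics in Table 14, the following Python sketch computes the index of difficulty in its Shannon formulation, the resulting throughput, and the path efficiency; the numeric values in the usage example are invented for demonstration and the function names are our own.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits [51,129]."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time_s):
    """Throughput (bits/s): index of difficulty over movement time [107]."""
    return index_of_difficulty(distance, width) / movement_time_s

def path_efficiency(straight_line_dist, traveled_dist):
    """Path efficiency (%): straight-line distance over distance traveled [107,126]."""
    return 100.0 * straight_line_dist / traveled_dist

# Example: a 12 cm reach to a 3 cm wide target completed in 1.8 s
print(f"{throughput(12, 3, 1.8):.2f} bits/s")   # ID = log2(5), about 2.32 bits
print(f"{path_efficiency(12, 15):.1f} %")       # 80.0 %
```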
