Review

A Review of Myoelectric Control for Prosthetic Hand Manipulation

1 Laboratory for Embedded System and Intelligent Robot, Wuhan University of Science and Technology, Wuhan 430081, China
2 Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Department of Computer Science and Technology, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
3 School of Engineering and Technology, China University of Geosciences, Beijing 100083, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Biomimetics 2023, 8(3), 328; https://doi.org/10.3390/biomimetics8030328
Submission received: 29 May 2023 / Revised: 14 July 2023 / Accepted: 19 July 2023 / Published: 24 July 2023
(This article belongs to the Special Issue Intelligent Human-Robot Interaction)

Abstract

Myoelectric control for prosthetic hands is an important topic in the field of rehabilitation. Intuitive and intelligent myoelectric control can help amputees regain upper limb function. However, current research efforts are primarily focused on developing rich myoelectric classifiers and biomimetic control methods, so prosthetic hand manipulation remains limited to simple grasping and releasing tasks, and complex daily tasks are rarely explored. In this article, we conduct a systematic review of recent achievements in two areas, namely, intention recognition research and control strategy research. Specifically, we focus on advanced methods for motion intention types, discrete motion classification, continuous motion estimation, unidirectional control, feedback control, and shared control. Based on this review, we then analyze the challenges and opportunities in two research directions, functionality-augmented prosthetic hands and user burden reduction, which can help overcome the limitations of current myoelectric control research and point to prospects for future work.

1. Introduction

Motor neurons integrate inputs from the central nervous system (CNS) and incoming feedback and transform them into neural drive to the muscles [1]. In simple terms, motor neurons release action potentials to the muscles, translating these neural commands into force and movement. A motor unit (MU) is a functional unit that consists of a motor neuron and the muscle fibers innervated by it. It is the smallest neurologically controlled unit that describes the process of muscle contraction. The sum of action potentials in the muscle fibers of an MU represents the motor unit action potential (MUAP) [2].
The electromyographic (EMG) signal, which can be captured by sensing devices, results from the convolution of each motor neuron’s pulse train with its MUAP [3]. In reality, the EMG signal is a composite signal formed by the superposition of multiple MUAP sequences [4]. Therefore, the EMG signal directly reflects the discharge characteristics of the MUs within the measured muscle. The amplitude of EMG signals typically lies within ±5000 μV, and the frequency range is usually between 6 and 500 Hz, with most of the signal power concentrated between 20 and 150 Hz [5].
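In formal terms, this generation model is commonly written as a sum of convolved spike trains (a standard formulation in the sEMG literature, consistent with [3,4]):

x(t) = Σ_j Σ_k h_j(t − t_{j,k}) + n(t),

where h_j(t) is the MUAP waveform of the j-th MU, t_{j,k} is the k-th discharge time of the j-th motor neuron, and n(t) is additive noise.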
Based on the placement of sensing devices, the measured EMG signals can be divided into intramuscular electromyography (iEMG) and surface electromyography (sEMG). iEMG is obtained by implantable sensing devices placed inside the subject’s body. Its advantage lies in its ability to capture EMG signals from specific muscle locations, which is more conducive to exploring the relationship between muscle activation and task execution. However, a drawback is that implantable sensing devices inevitably cause physical harm to the subject’s limb and raise concerns about long-term adaptability. On the other hand, sEMG is acquired through sensing devices placed on the surface of the skin. Although it can only capture combined EMG signals from muscle groups, it has been favored in research on myoelectric prosthetic hand control due to its ease of acquisition and rich information content.
The concept of myoelectric control was first proposed in 1948, but it was not until the 1960s that myoelectric prostheses had a significant clinical impact [6]. Despite decades of promising laboratory results, the high abandonment rate of prosthetic hands has not improved, mainly because amputees cannot yet obtain intuitive and natural myoelectric control. The development stages of myoelectric control can be summarized as (1) switch control strategy; (2) proportional control strategy; (3) pattern recognition control strategy; and (4) simultaneous and proportional control strategy:
  • The switch control strategy is a simple technique that uses smoothed and rectified sEMG signals and predefined thresholds to achieve single-degree-of-freedom control of prosthetic hands, such as grasping or wrist rotation. The principle of this strategy is to establish a mapping between sEMG amplitude and the activation of a prosthetic hand movement: if the amplitude exceeds a manually preset threshold, the prosthetic hand executes the action at a constant speed/force (see the sketch after this list);
  • The proportional control strategy achieves variable speed/force movements of the prosthetic hand in proportion to the user’s input signal. It establishes a mapping between sEMG amplitude and the degree of movement of the prosthetic hand, where the degree of movement can be described by force, speed, position, or another mechanical output;
  • Pattern recognition technology is a method based on feature engineering and classification techniques and is currently a research hotspot in myoelectric control. The principle of the pattern recognition control strategy is that similar sEMG signal features recur across trials of the same action pattern. These features can serve as the basis for distinguishing different action patterns, thereby recognizing a wider variety of action patterns than the number of input channels. The pattern recognition control strategy simplifies the representation of hand movements, effectively reduces the difficulty of the task, and significantly improves the accuracy of traditional motion intent recognition;
  • The simultaneous and proportional control strategy aims to capture the entire process of the user’s execution of hand movements, including the different completion stages of a single action and the transition stages between different actions, which is a more complex and dynamic process. Compared to the above strategies, the multi-degree-of-freedom simultaneous and proportional control strategy does not rely on preset action patterns but instead uses regression methods to estimate the hand state at each moment in real time, such as joint angles, positions, or torques. This feature allows users to control the myoelectric prosthetic hand more intuitively and naturally, making it a new research hotspot in the field of myoelectric control in recent years.
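To make the first two strategies concrete, the following minimal Python sketch maps a rectified-and-smoothed sEMG envelope either to a constant-speed command (switch control) or to a speed proportional to the envelope (proportional control). The window length, threshold, and gain values are hypothetical placeholders for illustration, not values from any cited study.

```python
import numpy as np

def emg_envelope(emg, fs=1000, win_ms=200):
    """Rectify raw sEMG and smooth it with a moving-average window."""
    win = int(fs * win_ms / 1000)
    return np.convolve(np.abs(emg), np.ones(win) / win, mode="same")

def switch_control(envelope, threshold=0.1, speed=1.0):
    """Switch control: constant-speed command whenever the envelope
    exceeds a manually preset threshold."""
    return np.where(envelope > threshold, speed, 0.0)

def proportional_control(envelope, threshold=0.1, gain=5.0, max_speed=1.0):
    """Proportional control: command speed scales with the envelope
    amplitude above the threshold, capped at the maximum speed."""
    return np.clip(gain * (envelope - threshold), 0.0, max_speed)
```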
Despite the progress made in the past decade in areas such as biocompatible electrodes, surgical paradigms, and mechatronic integration [7], the development of current myoelectric prosthetic hand control schemes has been hindered by limitations in motion intention recognition, resulting in a lack of intuitive and robust human–machine interfaces. Previous works have reviewed various aspects of research progress, including methods for predicting continuous upper limb movements based on sEMG [8], the application of deep learning in multi-task human–machine interaction (HMI) based on sEMG [9], and various performance indices in myoelectric control [10]. Existing myoelectric control research for prosthetic hand interaction mainly focuses on intention recognition and control strategy, aiming to accurately decode human intention through recognition algorithms and to drive the prosthetic hand to execute that intent through control algorithms. However, further research is needed to develop myoelectric control schemes for complex daily manipulation scenarios [11]. There is still a lack of attention to the challenges and opportunities that may be encountered in developing the manipulation capabilities of myoelectric prosthetic hands.
The main contributions of this article are as follows: (1) We provide a comprehensive review of the basic concepts in myoelectric control, with a focus on mapping human intention to control parameters; (2) We divide the current state of myoelectric control research into two aspects for a thorough review, namely, intention recognition and control strategy; (3) We discuss the current challenges and future research directions of prosthetic hand manipulation, aiming to provide a novel perspective for the advancement of the myoelectric control field.
The remainder of the article is organized as follows: Section 2 presents the basic concepts of myoelectric control; Section 3 presents current research status, including advances in intention recognition and control strategy research; Section 4 presents challenges and opportunities in the future of myoelectric control; Section 5 presents the conclusions of the article.

2. Basic Concepts of Myoelectric Control

In this section, we introduce the basic concepts of myoelectric control, which include sEMG signal processing, decoding models, and mapping parameters. The approach to myoelectric control involves first characterizing the user’s motion intention as specific physical parameters, which are then translated into control commands to achieve motion control of the prosthetic hand. The specific steps are illustrated in Figure 1. A summary of the related research for each section is provided in Table 1.

2.1. sEMG Signal Processing

sEMG signal analysis refers to a series of processing steps applied to sEMG signals acquired from the human body by signal acquisition devices. The primary objective is to eliminate noise unrelated to the intended movements while retaining as many useful features as possible, so that the user’s intended movements can be identified accurately.

2.1.1. Pre-Processing

Common-mode interference can affect the sEMG signals recorded from the human body [53], including 50 Hz power-line interference, line voltage, and contamination from myocardial electrical activity [54]. Additionally, inherent instability exists within the sEMG signals themselves: they contain noise within the 0–20 Hz frequency range, influenced by the firing rate of MUs, while high-frequency components exhibit a low power spectral density [55]. Therefore, to improve the quality of sEMG signals, filters are commonly employed to remove frequency components below 20 Hz and above 500 Hz, as well as interference at around 50 Hz.
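As a concrete illustration of this pre-processing chain, the sketch below applies a 20–500 Hz band-pass filter followed by a 50 Hz notch filter using SciPy. The filter order, sampling rate, and notch quality factor are illustrative choices rather than prescriptions from the cited works.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def preprocess_semg(emg, fs=2000.0):
    """Band-pass 20-500 Hz to suppress low-frequency noise and
    out-of-band components, then notch out 50 Hz line interference."""
    b, a = butter(4, [20.0, 500.0], btype="bandpass", fs=fs)
    emg = filtfilt(b, a, emg)          # zero-phase band-pass
    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
    return filtfilt(b, a, emg)         # zero-phase 50 Hz notch
```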
Applying only filtering to the raw signals before inputting them into the decoding model maximizes the retention of useful information from the human signals. This approach aims to enhance the performance and practicality of the intent recognition system for real-world applications in myoelectric prosthetic hand systems. However, achieving high accuracy in intent recognition from merely filtered sEMG signals requires a decoding model with feature learning capabilities, whose hierarchical structure transforms simple feature representations into more abstract and effective ones [56].

2.1.2. Feature Engineering

Traditional feature engineering for analyzing EMG signals can typically be categorized into three types: time-domain, frequency-domain, and time–frequency-domain features [57]. Time-domain features extract information about the signal from its amplitude, frequency-domain features provide information about the power spectral density of the signal, and time–frequency-domain features represent different frequency information at different time positions [58]. Frequency-domain features have unique advantages in muscle fatigue recognition. However, a comparative experiment based on 37 different features showed that frequency-domain features are not well suited for EMG signal classification [59]. Time–frequency-domain features are also limited in their application due to their inherent computational complexity. Time-domain features are currently the most popular feature type in the field of intent recognition. Therefore, we mainly focus on single and combined features based on time-domain features.
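For example, four classic time-domain features (mean absolute value, waveform length, zero crossings, and slope-sign changes, following the widely used Hudgins feature set) can be computed per analysis window as in the sketch below; the dead-band threshold eps is an illustrative assumption used to suppress noise-induced crossings.

```python
import numpy as np

def td_features(window, eps=0.01):
    """Classic time-domain features for one sEMG analysis window."""
    mav = np.mean(np.abs(window))               # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))        # waveform length
    zc = np.sum((window[:-1] * window[1:] < 0)  # zero crossings
                & (np.abs(window[:-1] - window[1:]) > eps))
    d = np.diff(window)
    ssc = np.sum((d[:-1] * d[1:] < 0)           # slope-sign changes
                 & ((np.abs(d[:-1]) > eps) | (np.abs(d[1:]) > eps)))
    return np.array([mav, wl, zc, ssc], dtype=float)
```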
Feature engineering techniques map high-dimensional sEMG signals into a lower-dimensional space. This significantly reduces the complexity of signal processing, retaining the useful and distinguishable portions of the signal while eliminating unnecessary information. However, due to the stochastic nature of sEMG signals and the interference between muscles during movement, traditional feature engineering inevitably masks or overlooks some useful information within the signals, thereby limiting the accuracy of intent recognition. Additionally, since sEMG signals contain rich temporal and spectral information, relying on a specific and limited feature combination may not yield the optimal solution [45]. Moreover, no universally effective feature has been identified thus far, necessitating further exploration of features that can contribute significantly to improving intent recognition performance.

2.2. Decoding Model

Decoding models serve as a bridge between user motion intentions and myoelectric prosthetic hand control commands, playing a significant role in motion intention recognition schemes. The purpose of a decoding model is to represent the linear or nonlinear relationship between the inputs and outputs of the motion intention recognition scheme, which can be achieved through either establishing an analytical relationship or constructing a mapping function. The former are known as model-based methods, while the latter are referred to as model-free methods [8], which include traditional machine learning algorithms and deep learning algorithms. In this section, we discuss the research progress in musculoskeletal models, traditional machine learning models, and deep learning models applied in sEMG-based intention recognition. Among them, three representative deep learning models are selected, namely, convolutional neural networks (CNN), recurrent neural networks (RNN), and hybrid-structured models, as illustrated in Figure 2, showcasing some application examples.

2.2.1. Musculoskeletal Models

Musculoskeletal models are the most commonly used model-based approaches for myoelectric prosthetic hand control. These models aim to understand how the neural commands responsible for human motion are transformed into actual physical movements by modeling muscle activation, kinematics, contraction dynamics, and joint mechanics [63]. Incorporating detailed musculoskeletal models in the study of human motion contributes to a deeper understanding of individual muscle and joint loads [64]. The earliest muscle model was proposed by A. V. Hill in 1938: a phenomenological lumped-parameter model that explained the input–output data obtained from controlled experiments [21]. The musculoskeletal models now commonly used in the field of myoelectric prosthetic hand control include the Mykin model [23] and simplified musculoskeletal models [25].
Musculoskeletal models, which explicitly encode the anatomical structure of the musculoskeletal system, can better simulate human physiological motion [65] and are therefore commonly applied in research on human motion intention recognition. However, human–machine interfaces based on musculoskeletal models require sEMG signals from the specific muscles represented by the model. This may require the use of invasive electrodes, which inevitably pose physical harm to the subjects’ limbs and require the involvement of expert physicians, creating various inconveniences and obstacles. A promising approach to facilitate the wider application of musculoskeletal models is the localization of specific muscles in the subjects’ limbs through high-density sEMG signal decomposition techniques.

2.2.2. Traditional Machine Learning Models

Machine learning algorithms typically establish mappings between inputs and desired target outputs using approximated numerical functions [8]. They learn from given data to achieve classification or prediction tasks and are widely applied in the field of motion intention recognition.
Gaussian processes are non-parametric Bayesian models commonly applied in research on human motion intention recognition. Non-negative matrix factorization (NMF) is one of the most popular algorithms in motion intention recognition based on sEMG. As the name suggests, the design concept of this algorithm is to decompose a non-negative large matrix into two non-negative smaller matrices. In the field of motion intention recognition based on sEMG, the sEMG signal matrix is often decomposed into muscle activation signals and muscle weight matrices using non-negative matrix factorization algorithms. The muscle weight matrix is considered to reflect muscle synergies [29].
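A minimal sketch of this synergy-extraction step with scikit-learn follows; the number of synergies, the matrix shapes, and the random placeholder data are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF

# E: non-negative sEMG envelope matrix, shape (n_channels, n_samples)
E = np.abs(np.random.randn(8, 5000))  # placeholder for real recordings

# Decompose E ~ W @ H: W (n_channels, n_synergies) is the muscle weight
# matrix interpreted as synergies; H holds the activation signals.
model = NMF(n_components=3, init="nndsvd", max_iter=500)
W = model.fit_transform(E)   # muscle weights per synergy
H = model.components_        # time-varying synergy activations
```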
Despite achieving certain results, motion intention decoding models based on traditional machine learning algorithms often rely on tedious manual feature engineering. Research has shown that methods utilizing traditional machine learning algorithms still fail to meet the requirements of current human–machine interaction scenarios, such as EMG prosthetic hand control, in terms of accuracy and real-time responsiveness [66].

2.2.3. Deep Learning Models

Deep learning algorithms can be used to classify input data into corresponding types or regress them into continuous sequences in an end-to-end manner, without the need for manual feature extraction and selection [9]. The concept of deep learning originated in 2006 [67], and, since then, numerous distinctive new algorithm structures have been developed. In this section, we discuss three commonly used deep learning methods for motion intention recognition: CNN and its variants, RNN and its variants, and hybrid-structured networks. We provide an overview of their research progress.
CNN-based models: CNN was first proposed in 1980 [68]. Due to its design of convolutional layers, it has the capability to learn general information from a large amount of data and provide multiple outputs. CNN has been applied in various fields such as image processing, video classification, and robot control. It has also found extensive applications in the field of motion intention recognition;
RNN-based models: RNN was introduced to model the temporal information within sequences and effectively extract relevant information between input sequences. However, due to the issue of vanishing or exploding gradients, RNN struggles to remember long-term dependencies [9]. To address this inherent limitation, a variant of RNN called long short-term memory (LSTM) was introduced, which has gained significant attention in research on recognizing human motion intentions, and numerous studies have applied LSTM in this field;
Hybrid-structured models: Hybrid-structured deep learning algorithms typically combine two or more different types of deep learning networks. For motion intention recognition based on sEMG, hybrid-structured algorithms often outperform other approaches in intention recognition tasks. One compelling reason is that they extract more abstract features from sEMG signals, potentially capturing more hidden information, which leads to improved performance in motion intention recognition (a minimal sketch of such an architecture follows).
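A minimal PyTorch sketch of such a hybrid architecture is given below; the layer sizes, window shape, and number of gesture classes are illustrative assumptions, not a published design. Convolutional layers extract spatial features across electrode channels, and an LSTM models their temporal evolution.

```python
import torch
import torch.nn as nn

class HybridCNNLSTM(nn.Module):
    """CNN front-end over sEMG channels followed by an LSTM over time."""
    def __init__(self, n_channels=8, n_classes=10, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):         # x: (batch, n_channels, time)
        f = self.conv(x)          # (batch, 32, time)
        f = f.transpose(1, 2)     # (batch, time, 32) for the LSTM
        _, (h, _) = self.lstm(f)  # h: (1, batch, hidden)
        return self.head(h[-1])   # gesture-class logits

logits = HybridCNNLSTM()(torch.randn(4, 8, 200))  # 4 windows of 8-channel sEMG
```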
Deep learning algorithms have been widely applied in the recognition of human motion intentions due to their unique end-to-end mapping approach. They eliminate the need for researchers to manually extract signal features and instead learn more abstract and effective features through the depth and breadth of their network structures. However, their lack of interpretability makes it challenging to integrate them with biological theories for convincing analysis. Moreover, the increased complexity of networks associated with improved recognition accuracy results in significant computational demands. This poses challenges for tasks such as EMG prosthetic hand control that require fast response times within specific time frames. Currently, most research in this area is based on offline tasks. Therefore, key technical research focuses on how to incorporate human biological theories into the design of deep learning algorithms and achieve high accuracy and fast response in motion intention recognition solely through lightweight network structures.

2.3. Mapping Parameters

Mapping parameters, as the output part of the motion recognition scheme, serve as the parameterization of user motion intention and control commands for the myoelectric prosthetic hand system. Human hand motion is controlled by approximately 29 bones and 38 muscles, offering 20–25 degrees of freedom (DoF) [69], enabling flexible and intricate movements. The movement of the human hand is achieved through the interaction and coordination of the neural, muscular, and skeletal systems [70]. This implies that the parameterization of motion intention should encompass not only the range of motion of each finger but also consider the variations in joint forces caused by different muscle contractions. In myoelectric hand control, two commonly used control methods exist [8]: (1) Using surface electromyography (sEMG) as the input for decoding algorithms, joint angles are outputted as commands for controlling low-level actuators; (2) Using sEMG as the input, joint torques are output and sent to either the low-level control loop or directly to the robot actuators (referred to as a force-/torque-based control algorithm). Based on the aforementioned analysis, this manuscript provides an overview of mapping parameters, which are categorized into three parts: kinematic parameters, dynamic parameters, and other parameters.

2.3.1. Kinematic Parameters

Parameterizing human motion intention typically involves kinematic parameters such as joint angles, joint angular velocity, and joint angular acceleration:
Joint angle: Analyzing biomechanical muscle models reveals [45] that joint angles specifically define the direction of muscle fibers and most directly reflect the state of motion;
Joint angular velocity: If we consider joint angles as the state, then joint angular velocity serves as the control vector for changing the state, representing the rate of change of joint angles in the human body per unit time. Joint angular velocity is more stable compared to other internal physical quantities, making it advantageous for better generalization to new individuals. It is closely related to the extension/flexion movement commands of each joint [71];
Joint angular acceleration: Joint angular acceleration describes how quickly the joint angular velocity changes. Its relationship to joint angular velocity mirrors the relationship between joint angle and joint angular velocity. Some studies suggest a significant correlation between joint angular acceleration and muscle activity [72] (see the sketch after this list).
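Since angular velocity and acceleration are successive time derivatives of the joint angle, both can be recovered from a sampled angle trajectory by numerical differentiation, as in the brief sketch below (the sampling rate and the sinusoidal trajectory are illustrative placeholders).

```python
import numpy as np

fs = 100.0                            # assumed sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
theta = 0.5 * np.sin(2 * np.pi * t)   # placeholder joint angle (rad)

omega = np.gradient(theta, 1.0 / fs)  # joint angular velocity (rad/s)
alpha = np.gradient(omega, 1.0 / fs)  # joint angular acceleration (rad/s^2)
```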

2.3.2. Dynamics Parameters

When a myoelectric prosthetic hand is used to perform daily grasping tasks, applying the appropriate contact force is also a crucial factor in determining the task success rate for individuals with limb loss. Among the dynamic parameters commonly used for parameterizing human motion intention, joint torque has been proven to be closely related to muscle strength [21]. For laboratory-controlled prostheses operated through a host computer, suitable operation forces can be achieved using force/position hybrid control or impedance control. However, for myoelectric prosthetic hands driven by the user’s biological signals, effective force control must be achieved by decoding the user’s intention, which requires converting human motion intention into dynamic parameters.
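For illustration, a joint-space impedance law can convert a decoded reference angle into a joint torque; the sketch below is a minimal example with hypothetical stiffness and damping gains, not a controller taken from the cited studies.

```python
def impedance_torque(theta_ref, theta, dtheta, k=2.0, d=0.1):
    """Joint-space impedance control: the joint behaves like a spring-damper
    pulled toward the reference angle decoded from the user's sEMG."""
    return k * (theta_ref - theta) - d * dtheta
```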

2.3.3. Other Parameters

A common approach to parameterizing human motion intention with task-specific indicators involves guiding subjects to perform specific tasks, obtaining target data, and using these cues as labels [73]; motion intention recognition is then achieved with supervised learning decoding models. Representing human motion intention with such alternative parameters reduces the complexity of the intention decoding task, simplifying the process and facilitating the deployment of motion intention recognition schemes on physical platforms. However, when the mapping variables of an end-to-end decoding model lack meaningful biological interpretations, the persuasiveness and interpretability of the already opaque decoding models are further reduced. Therefore, when considering non-biological alternative parameter schemes, it is important to strike a balance between control performance and the interpretability of human physiological mechanisms.

3. Current Research Status

Research on myoelectric control for prosthetic hand manipulation is primarily focused on two aspects: the first is how to accurately decode human motion intention through a recognition algorithm, and the second is how to drive the prosthetic hand to perform that intention through a control strategy. Thus, the following sections provide a review of current research progress in intention recognition and control strategy for myoelectric prosthetic hands.

3.1. Advances in Intention Recognition Research

As the forward human–machine interaction component of the myoelectric prosthetic hand system, hand movement intention recognition based on sEMG aims to represent the user’s movement intention as specific physical parameters, which are then converted into control commands to achieve prosthetic hand movement control. Therefore, the following sections first summarize and classify the types of movement intentions involved in current research, and then review discrete movement classification and continuous movement estimation methods separately.

3.1.1. Type of Motion Intention

This section categorizes hand movement intentions based on task complexity by examining public datasets or self-built datasets of EMG signals. We have divided existing studies into two experimental paradigms: discrete motion performed separately and combined motion performed continuously.
Discrete action performed separately: This experimental paradigm requires the subjects to repeat a certain action multiple times in a single experiment, with the hand returning to the initial state between actions, making it easier to mark data segments while facilitating the learning of intention recognition models. This is currently the most explored type of movement intention, and widely used publicly available datasets such as Ninapro [74] were designed based on this paradigm. In addition to basic unconstrained discrete hand movement recognition, some studies have attempted to explore constrained discrete action recognition involving object manipulation [12], aiming to achieve more natural and intuitive control of myoelectric prosthetic hands. This paradigm treats the initial state as a transition stage between action data, which, much like normalization, sharpens the distinction between the data of different actions. However, it ignores the switching between actions [9], leading to high offline recognition accuracy but errors in online control during transition stages, with less-than-ideal results when applied to practical myoelectric prosthetic hand control.
Combined action performed continuously: In this experimental paradigm, subjects are not strictly required to return to the initial state between two actions, allowing different hand movements, combinations, or transitions to be executed more naturally. Some researchers have even explored allowing subjects to perform previously undefined actions at will [27]. The continuous execution of combined actions undoubtedly yields motion data that are closer to real-life activities. However, the difficulty of the intention recognition task also increases significantly, not only because the action types are more diverse, but also because transitions between similar actions and combinations of different actions make recognition more complex. Therefore, researchers are still exploring how to effectively recognize the movement intentions of different action combinations or transitions [37].

3.1.2. Discrete Motion Classification

Discrete action classification treats the entire process of the user performing a hand movement as a single pattern, simplifying the representation of hand movement as a static action and ignoring changes in hand structure during motion. Currently, the most advanced method for discrete action classification is pattern recognition technology. Classification tasks are designed according to predefined action categories, and corresponding motion labels are created for the sEMG data of each action. A new sEMG signal is then assigned the motion label whose data characteristics it most closely resembles.
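A minimal end-to-end sketch of such a pattern-recognition pipeline is shown below, assuming sliding-window time-domain features and a linear discriminant analysis (LDA) classifier, a common choice in this literature; all data shapes, window parameters, and the random placeholder data are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def window_features(emg, labels, fs=1000, win_ms=200, step_ms=50):
    """Slide a window over multichannel sEMG and stack per-channel
    time-domain features (here mean absolute value and waveform length)."""
    win, step = int(fs * win_ms / 1000), int(fs * step_ms / 1000)
    X, y = [], []
    for start in range(0, emg.shape[1] - win, step):
        seg = emg[:, start:start + win]
        feats = np.concatenate([np.mean(np.abs(seg), axis=1),
                                np.sum(np.abs(np.diff(seg, axis=1)), axis=1)])
        X.append(feats)
        y.append(labels[start + win // 2])   # label at window center
    return np.array(X), np.array(y)

emg = np.random.randn(8, 20000)          # placeholder 8-channel recording
labels = np.repeat(np.arange(4), 5000)   # placeholder per-sample class labels
X, y = window_features(emg, labels)
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))                   # training accuracy, for illustration only
```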
Several studies have reviewed the research progress on discrete action classification based on pattern recognition [75,76]. For discrete action classification tasks, the number of predefined motion types is crucial for achieving intuitive myoelectric prosthetic hand control. Hence, we select studies with more than 40 predefined motion types as the latest research advancements in this field. Ghaderi et al. [77] proposed three new sEMG features based on kernel density estimation to improve the classification accuracy of a large number of hand movements, achieving an accuracy of 98.99 ± 1.36% in the classification of 40 hand and wrist movements across 40 able-bodied and 11 amputee subjects. Pizzolato et al. [78] compared 6 existing data acquisition devices on a 41-movement classification task. Although the best accuracy achieved was only 74.01 ± 7.59%, this study provides a reference solution for small-scale laboratories and pediatric prostheses, given cost constraints. Panyawut et al. [79] achieved high-precision recognition of 41 hand and wrist movements from sEMG signals using deep neural networks, with an accuracy of up to 90%; according to their results, this is currently the study with the largest number of predefined motion types with an accuracy above 90%. Zhai et al. [80] implemented an effective self-recalibration function for myoelectric control by combining a CNN classifier with a simple label-updating mechanism. Despite achieving a highest accuracy of only 61.7% in a 50-class hand gesture classification task, the label-updating mechanism improved the classifier’s accuracy by 4.2 to 10.18%.
Implementing discrete action classification based on pattern recognition reduces the difficulty of the task by simplifying the representation of hand movements, while significantly improving the accuracy of traditional motion intention recognition schemes. However, this method has inherent drawbacks. Discrete action classification relies on predefined labels assigned to the data, and these labels carry no actual physical meaning. In fact, a supervised learning architecture that depends on predefined labels lacks the ability to generalize to undefined categories [81]. In addition, experiments in laboratory environments usually only consider the static part of sEMG signals (time intervals where force is maintained at roughly constant levels without movement) for classification, whereas the transitions between different movements are characterized by non-stationary signal components [3]. This is the main reason for errors in practical applications of pattern recognition-based classification systems. Therefore, predicting the transitional states between gestures is a key issue that urgently needs to be addressed.

3.1.3. Continuous Motion Estimation

In contrast to discrete action classification, continuous motion estimation aims to capture the entire process of user hand movements, including the different completion stages of a single action and the transitional stages between different actions, which is a more complex and dynamic process. Continuous motion estimation does not rely on predefined action patterns, but instead estimates the hand state (such as joint angles, positions, or torques) at each moment in real time. This feature allows for more intuitive and natural control of myoelectric prosthetic hands, making continuous motion estimation a research hotspot in the field of myoelectric intention recognition in recent years.
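In code, the shift from classification to regression is essentially a change of target: instead of a class label per window, the model predicts continuous joint angles. A minimal sketch with ridge regression follows; the feature matrix, angle targets, and data shapes are illustrative assumptions rather than any published protocol.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X = np.random.randn(2000, 16)   # placeholder per-window sEMG features
Y = np.random.randn(2000, 3)    # placeholder synchronized 3-DoF joint angles

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2)
reg = Ridge(alpha=1.0).fit(X_tr, Y_tr)   # multi-output linear regression
Y_hat = reg.predict(X_te)                # continuous joint-angle estimates
```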
Most research in continuous motion intention recognition begins by validating the feasibility of single DoF motion estimation tasks, before gradually improving to tasks that estimate two or more DoF. Kapelner et al. [31] used the discharge time of MUs determined by decomposing high-density sEMG to predict three-DoF joint angles of the wrist, separately. Their results among seven participants demonstrated that the neural features obtained from sEMG decomposition outperformed traditional time-domain features in motion estimation. Dai et al. [82] achieved motion estimation for the MCP joint of a single finger during movement in flexion and extension manners continuously without interruption. They improved the performance of the regression method for simulating natural finger movements by combining array sEMG sensors and independent component analysis. He et al. [60] extracted muscle synergies from sEMG data to achieve single-DoF joint angle prediction for the wrist, thumb, index finger, and middle finger. Their results demonstrated higher prediction accuracy compared to traditional musculoskeletal models and machine learning methods.
The flexibility of the human hand largely derives from its five-finger, multi-joint structure. Therefore, the simultaneous estimation of multiple-DoF motion is crucial for decoding hand motion intentions and is an inevitable trend in motion intention recognition research. Yu Yang et al. achieved simultaneous estimation of two-DoF wrist motion by constructing 2D images of sEMG signals that incorporated the globally spatial information across channels [46], and their subsequent research further enhanced wrist torque estimation accuracy by introducing interactions between different motion units to construct specific motion-unit images [35]. Zhang et al. [12] proposed a sparse pseudo-input Gaussian process regression method to achieve simultaneous estimation of the five-DoF motion of the MCP joints in functional grasping tasks, which benefits intuitive and accurate myocontrol of robotic hands. Yang et al. [33] successfully decoded complex three-DoF wrist movements directly from raw sEMG signals. Their findings demonstrated the high accuracy of this method (superior to support vector regression) and its generalizability (extendable to able-bodied subjects without specific training).
The successful application of motion intention recognition methods in multi-DoF simultaneous motion estimation tasks represents a significant advancement in the study of the complex structure and flexible functionality of the human hand. This provides a theoretical foundation and feasibility for restoring hand function in amputees through prosthetic hands. However, due to the non-stationarity of sEMG signals and the complexity of and variability in human motion, the intrinsic physiological–physical relationship between sEMG and hand movements cannot be fully described [83], which leads to high difficulty in implementing continuous motion estimation. The estimated physical quantities are only approximations, and accuracy still needs to be improved.

3.2. Advances in Control Strategy Research

The intention decoder only focuses on accurately recognizing the user’s motion intentions. The research on control strategies is mainly focused on how to efficiently and intuitively execute the recognized intentions using myoelectric prosthetic hands. Based on the complexity of the control framework, current research on control strategies can be divided into three categories: unidirectional control, feedback control, and shared control, as shown in Figure 3. In this manuscript, we provide an overview of the progress made in each of these areas.

3.2.1. Unidirectional Control

Unidirectional control refers to one-way control of the myoelectric prosthetic hand, with the user’s motion intention as the sole control source. This can be achieved by directly mapping signal amplitude or signal characteristics to the activation or degree of motion of the prosthetic hand. This control strategy is the most straightforward and most widely used, and its effectiveness has been validated in commercial products [84,85,86].
Hahne et al. [32] successfully applied regression methods to the unidirectional control of a myoelectric prosthetic hand and explored the impact of different arm positions and time restrictions. The proposed method outperformed two clinical control methods in most cases and demonstrated robust performance over multiple days with five prosthesis users. Similarly, Domenico et al. [87] investigated the superiority of nonlinear regression classifiers for myoelectric unidirectional control, conducting experiments in which amputee subjects intuitively and simultaneously controlled the Hannes system. Pizza et al. [20] successfully combined probability-weighted regression with sEMG signals to simultaneously control multiple DoF of prosthetic hands. The algorithm demonstrated excellent performance in the two-DoF case and enabled amputees to perform several daily tasks using a two-DoF wrist prosthesis. Lukyanenko et al. [14] proposed a stable unidirectional control strategy based on a collaborative framework, achieving long-term use of three-DoF control for up to 10 months and four-DoF control for up to 9 months.
In summary, the research on unidirectional control strategies for myoelectric prosthetic hands has been focused on the practical application of regression techniques to clinical practice with the aim of increasing the number of DoF that users can control simultaneously. The performance of the unidirectional control strategy is constrained by the insufficient sensory information available to the user, with users primarily relying on visual guidance and utilizing residual limb proprioception as a weak auxiliary aid. However, overreliance on vision can limit the control of prosthetic hands, which is one of the reasons for high abandonment rates of commercial prosthetic hands.

3.2.2. Feedback Control

Sensory feedback in prosthetic hands is a hot research topic, with a sharp increase in the number of studies over the past few years. Although earlier studies claimed that feedback control does not significantly improve performance or only enhances prosthetic hand performance in specific contexts [88], recent years have seen substantial progress as researchers delve deeper into the fundamental role of feedback in prosthetic hand control. Due to inherent risks, relatively few studies have focused on invasive sensory feedback. This section therefore primarily reviews non-invasive sensory feedback systems, such as mechanotactile, vibrotactile, and electrotactile systems and their combinations.
Several prior reviews have extensively summarized the methods employed to provide feedback to prosthetic hand users [88,89,90]. Hence, we focus on the latest progress in successfully applying feedback control to prosthetic hand systems. Xu et al. [91] demonstrated the restoration of finger-specific tactile sensation through electrical stimulation, enabling amputees to successfully discriminate between finger-pressing states, object curvatures, and hardness; amputee subjects were able to discriminate objects with varying curvature and hardness with an accuracy of over 90%. Shehata et al. [92] explored an audio-guided feedback control strategy, which was found to be superior to the unidirectional control strategy in terms of path efficiency and other parameters; their experiments demonstrated that enhanced feedback control can improve both the short-term and long-term performance of prosthetic hands. Li et al. [93] developed a portable electrotactile stimulator and applied it in a virtual grasping scenario using myoelectric control. Their experimental results indicated that the success rate of grasping with electrotactile feedback control was higher than that of the unidirectional control strategy, and in heavy-object grasping experiments users were required to exert less effort, effectively alleviating task-related muscle fatigue. Cha et al. [94] proposed a new method for providing feedback on grasp information for robotic prosthetic hands and built a closed-loop integrated system for experimental validation, consisting of an EMG classification model, the proposed tactile device, and the robotic prosthetic hand. The experimental results demonstrated that this new feedback method could improve the recognition accuracy of proprioceptive feedback and has the potential to be applied in the feedback control of prosthetic hands.
The aforementioned studies have achieved closed-loop control of prosthetic hands through the integration of myoelectric control interfaces and artificial feedback. Their results show that feedback control not only improves the performance and practicality of prosthetic hands, but also enhances the user’s experience during interaction with their bionic limb. However, the benefits of feedback control have mainly been demonstrated in laboratory conditions rather than in daily operating scenarios, thus lacking sufficient reliability. In addition, integrating non-invasive feedback devices remains a challenge to be overcome, as their large size and the need to wear and remove them daily may be detrimental to the user experience.

3.2.3. Shared Control

For precise and repetitive tasks in structured environments, automated systems offer higher processing efficiency, whereas in unstructured environments humans possess the ability to make quick judgments and adapt flexibly. Because human capacity and energy are limited, automated systems and humans need to collaborate in task execution. This mode of collaboration is known in the fields of robotics and neural engineering as “shared control”. The concept of shared control has been applied in many research fields, such as teleoperated robots [95,96], brain–machine interfaces [97,98], autonomous driving [99], and surgical assistive robots [100]. According to the latest review on shared control schemes for teleoperated robots [101], existing schemes can be classified into semi-autonomous control (SAC), state-guided shared control (SGSC), and state-fusion shared control (SFSC), based on how control is shared between human users and autonomous controllers. Although SGSC and SFSC offer richer interaction means and more intelligent operation and represent the future development trend, SAC remains more popular: it is relatively simple but more practical and user-friendly, and it better aligns with the design intention of shared control, which is to combine the advantages of human users and intelligent controllers to enhance task performance and reduce user burden.
The application scenarios of prosthetic hands include both repetitive, intricate daily activities and dexterous manipulation tasks. Therefore, prosthetic hands need to adopt a shared control approach to work collaboratively with human users: the prosthetic hand responds to user commands and movements, achieving more natural control and higher precision (a minimal sketch of this idea follows). SAC is also the most commonly used shared control scheme in the field of prosthetic hands, and researchers have proposed a series of shared control schemes based on different perceptual modalities. In this section, we mainly investigate the current state of research in visual perception and tactile perception.
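As a minimal illustration of the SAC idea (all thresholds, gains, and signal names below are hypothetical), the user’s decoded intention decides what to do, triggering or releasing a grasp, while the autonomous controller decides how, regulating grip force from tactile feedback once contact is detected.

```python
def shared_control_step(intent, contact_force, grip, target_force=2.0, gain=0.05):
    """One SAC control tick: user intent selects the action; the autonomous
    controller regulates grip force after contact is detected."""
    if intent == "release":
        return max(grip - gain, 0.0)   # user override: open the hand
    if contact_force == 0.0:
        return grip + gain             # closing phase: follow user intent
    # contact detected: autonomous force regulation takes over
    return grip + gain * (target_force - contact_force)
```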
Visual perception of prosthetic hands can provide information about the shape of objects, allowing autonomous controllers to select grasping types or adjust finger configurations accordingly. Shared control schemes for visual perception include the following. Mouchoux et al. [102] proposed a novel AR feedback semi-autonomous control scheme that not only improved the flexibility of EMG control, but also effectively improved user experience by shortening operation time and reducing muscle activity. Castro et al. [103] designed a shared control scheme by placing a depth sensor on the back of the prosthetic hand, which enabled online interaction between users and the prosthetic hand. Users were responsible for aiming at the whole or part of the object, and the control system continuously responded to the aimed target. Starke et al. [104] proposed a semi-autonomous control scheme based on visual object recognition to automatically select and execute grasping trajectories and wrist orientations. Their results indicated that, compared to traditional myoelectric control, the vision-based shared control strategy enabled a faster and less physically demanding grasping process. Federico et al. [105] developed an approach based on “hand-eye learning” to control hand pre-shaping and grasp the aperture before grasping, according to the input from a wrist-mounted camera. They considered different types of grasping that can be associated with different parts of objects and successfully achieved complete control effects on the Hannes prosthetic hand.
Tactile perception of prosthetic hands can assist in achieving functions such as contact detection, adaptation to object shape, and force closure. Relevant research includes the following. Cipriani et al. [106] compared three types of tactile-perception shared control schemes: fully autonomous control, semi-autonomous control, and direct user control, and found that users preferred the semi-autonomous scheme even though the control performances were about the same. Zhuang et al. [107] designed a shared control scheme based on tactile sensors placed on the inside of the prosthetic hand. The prosthetic hand performed the grasping action based on user intention and then automatically maximized the contact area between the hand and the object according to tactile feedback, effectively enhancing user endurance while assisting in stable grasping. Seppich et al. [108] placed tactile sensors at the end of the prosthesis to perceive the shape and hardness of objects and provide feedback to the user to improve task performance; their results demonstrated that the proposed approach successfully assisted amputees in screwing in a lightbulb and flipping cups. Mouchoux et al. [109] used a pressure sensor placed on the thumb to detect contact with the target object, enabling the autonomous controller to judge the current task execution stage and implement shared control of the prosthetic hand based on three different user intentions and the decision of the autonomous controller.
Shared control schemes aim to bridge the gap between user intention and task execution expectations by combining the user’s motion intention with decisions made from the prosthetic hand’s own perception. This can effectively overcome the inherent limitations of myoelectric control schemes that rely solely on user intention. Moreover, these schemes significantly relieve the user’s burden, compensate for insufficient neuromuscular function, and enhance the robustness of myoelectric control. The use of shared control has significant implications for the field of prosthetic hands.

4. Challenges and Opportunities

The above summary of the current state of myoelectric control research highlights a range of state-of-the-art technologies and applications. Although significant experimental results have been achieved, there are still the following challenges that need to be addressed:
  • Although there has been progress in decoding motion intentions for a variety of basic hand movements, there is a lack of functional motion intention decoding that facilitates prosthetic hand manipulation, which means that current prosthetic hands are only able to perform simple grasping tasks;
  • Existing myoelectric control research primarily focuses on basic hand grasping functions in humans (see Table 2), whereas more investigations are required to explore complex daily manipulation tasks that demand continuous manipulation and dynamic grasping force adjustment;
  • Prioritizing recognition and generalization capabilities while neglecting the high abandonment rate and subjective user experience of prosthetic hands is a flawed approach. During the processes of both myoelectric training and control, users need to exert a significant amount of attention and effort.
Therefore, we propose two potential research directions to address the aforementioned challenges, namely, functionality-augmented prosthetic hands and user burden reduction.

4.1. Functionality-Augmented Prosthetic Hands

Functionality-augmented prosthetic hands aim to restore basic hand functionality for amputees while also introducing new, convenient features. Owing to the complexity of the required mechanical structures and control methods, even the most advanced technology currently available cannot produce prosthetic hands identical to human hands. As advanced intelligent robotic devices closely integrated with the human body, prosthetic hands should, we believe, not only focus on restoring basic human hand functionality, but also add intelligent functions that only robotic hands can provide, rather than simply pursuing a perfect match to human hand functionality. This can not only restore the daily manipulation abilities lost by amputees, but also enable them to complete specific tasks more efficiently, or even accomplish tasks beyond human hand capability. Previous studies have indicated that enhanced hand functionality can influence the user’s neural representation of the hand and motor control abilities, with the potential for increased flexibility in use, reduced cognitive reliance, and increased proprioceptive sensation [115].
Frey et al. [116] presented an octopus-inspired neural system that can detect objects and automatically initiate adhesive contact. The proposed method was applied to a wearable glove for picking up and releasing underwater objects of various shapes, including flat, curved, rigid, and soft objects. Chang et al. [117] introduced a humanoid prosthetic hand inspired by efficient swinging dynamics, which allowed users to control and maximize swinging velocity through the bio-inspired wrist mechanism design. Their results indicated that the proposed approach could increase the speed of the swing action by 19% at 90 rpm, meeting the demands of high-load, high-speed swinging sports activities. However, in research on augmenting robot hand functionality, achieving coordinated motion control of the palm and fingers appears more attractive to users. Lee et al. [118] proposed a novel robot palm with a dual-layered jamming mechanism, which automatically stiffens the palm by sensing internal palm pressure, enhancing grasping ability. Their results showed that, compared to a single-layer structure, the proposed palm could increase the contact surface area by 180% and the gripping force by 2–3.1 times. Heo et al. [119] proposed a bio-inspired triple-layered skin robot palm based on a porous latex structure; their results showed that the bio-inspired skin palm could firmly grasp objects while expanding the contact area thanks to its unique stiffness properties, demonstrating stronger grasping function and robustness to external interference in grasping tasks.
In summary, functionality-augmented prosthetic hands are one of the effective methods to overcome the current limitations of single-functionality in myoelectric prosthetic hands and are expected to bring more efficient and intelligent control for amputees. However, there are several challenges that functionality-augmented prosthetic hands still need to overcome before they can be widely applied:
  • The activation method of functionality-augmented technology must be intuitive and natural. If it requires complex pre-actions from the user, it will significantly increase their cognitive and control burden, such as requiring extensive long-term training. The multimodal human–machine interface for prosthetic hands may be an effective solution to this challenge [120,121]. It uses sEMG as the primary signal source, with other biological signals from the hand used as an auxiliary signal source to achieve natural and implicit control of functionality-augmented technology;
  • The hardware equipment that provides functionality-augmented technology needs to be highly integrated, ensuring that the overall volume and weight of the prosthetic hand remain within an acceptable range for the user;
  • Hand function augmentation may cause changes in the biological hand representation of the user, which is also a problem that needs to be addressed in functionality-augmented prosthetic hands. Functionality-augmented technology should not affect the user’s ability to control basic hand functions. Instead, it should produce a beneficial gain in the user’s own motion control capability, rather than a confusing adverse effect.

4.2. User Burden Reduction

The two key factors influencing the acceptance rate of prosthetic hands are intuitive control experiences and user burden [122,123]. However, current researchers focus more on developing myoelectric classifiers that cover diverse gestures than on the training and control burdens placed on users. We believe that reducing these burdens can help lower the high abandonment rate of prosthetic hands and promote the clinical application of myoelectric prosthetic hands by enhancing the user’s subjective experience. The user control burden can be effectively reduced through intelligent control strategies, as detailed in Section 3.2. Therefore, the following discussion focuses on the user training burden.
Generally, myoelectric training data acquired under ideal laboratory conditions yield promising results in offline experiments. However, they often perform poorly in actual prosthesis applications due to hybrid factors in real-world scenarios [9], including non-ideal conditions such as limb posture changes, skin perspiration, terminal load, muscle fatigue, long-term variations, and electrode shifts. Researchers have developed corresponding remedial measures for different non-ideal condition factors: updating myoelectric models via transfer learning to overcome electrode displacement or individual differences [124,125], enriching training datasets to address posture changes [126], and extracting EMG decomposition and synergistic features to overcome muscle fatigue [127,128], among others. A comprehensive review by Ziyou Li et al. [4] provides insights into the progress of myoelectric research under non-ideal conditions. Even so, myoelectric control often remains unsatisfactory despite the heavy data acquisition tasks carried out by participants. Therefore, improving the quality and efficiency of training data acquisition is crucial for reducing the training burden on users.
Improving myoelectric training paradigms is an effective way to reduce the user training burden. Yang et al. [129] proposed a dynamic myoelectric training paradigm and explored the effects of upper limb movement, contraction level, and unintentional EMG activation on the training data. Their results showed that a paradigm involving dynamic upper limb postures and dynamic muscle contractions achieved the most accurate and robust classifier. Kristoffersen et al. [130] explored the effects of restricting wrist and hand movements in able-bodied subjects, aiming to bridge the performance gap in myoelectric control between able-bodied and amputee subjects, and also tested the influence of arm posture. Their results suggested that restricting healthy limb movements is an effective training paradigm for improving training efficiency and reducing the performance differences between the two groups. In addition, paradigm improvements for other biosignal-driven prosthetic hands offer reference value for myoelectric training. Engdahl et al. [131] explored the effects of different socket loads, arm positions, and motion patterns on training paradigms and verified, for the first time, the feasibility of using sonomyography to control a prosthetic hand. Wang et al. [132] improved the classic center-out paradigm from EEG research, enhancing the paradigm's movement prediction performance and generalizability while significantly reducing subjects' physical exertion.
In conclusion, we believe that improving myoelectric training paradigms is a research direction that should be vigorously pursued. Compared with classifiers that require tens of hours or even days of training to cover a rich set of gestures, users prefer classifiers with a smaller training burden and greater stability. We also believe that training paradigms based on continuous hand movements, rather than the traditional paradigm of recording discrete movements, are a feasible direction for improving the manipulation performance of myoelectric control.
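As a rough illustration of such a continuous-movement paradigm, the sketch below slides windows over an uninterrupted recording and pairs each window with a synchronized joint-angle label for regression, instead of segmenting discrete gesture repetitions. The sampling rate, window settings, feature choice, and synthetic signals are all assumptions, not a protocol from the cited studies.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
fs, win, step = 1000, 200, 50  # 1 kHz sEMG; 200 ms windows, 50 ms increments

emg = rng.standard_normal((60 * fs, 8))     # synthetic 8-channel, 60 s recording
angles = rng.standard_normal((60 * fs, 5))  # synchronized joint angles (e.g., data glove)

X, y = [], []
for start in range(0, len(emg) - win, step):
    w = emg[start:start + win]
    X.append(np.sqrt(np.mean(w ** 2, axis=0)))  # per-channel RMS feature
    y.append(angles[start + win - 1])           # label: posture at the window's end

reg = Ridge(alpha=1.0).fit(np.array(X), np.array(y))
print(reg.predict(np.array(X[:3])))
```

Because every window of an unconstrained recording becomes a training sample, such a paradigm can extract far more labeled data per minute of acquisition than discrete-gesture protocols, which is the efficiency argument made above.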

5. Conclusions

In this study, we conducted a comprehensive investigation of recent advances in myoelectric control for prosthetic hand manipulation, with a primary focus on intention recognition and control strategy research. We first introduced the basic concepts of myoelectric control, covering its three key procedures: sEMG signal processing, decoding models, and mapping parameters. We then reviewed advances in intention recognition research from three perspectives: motion intent types, discrete motion classification, and continuous motion estimation; and advances in control strategy research from three aspects: unidirectional control, feedback control, and shared control. Based on this review, we proposed two future research directions that can overcome the current limitations of myoelectric control, namely, functionality-augmented prosthetic hands and user burden reduction. Improving myoelectric control performance for prosthetic hand manipulation will enhance the clinical application potential of myoelectric prosthetic hands and promote their widespread use.

Author Contributions

Conceptualization, B.F. and H.M.; formal analysis, Z.X.; writing—original draft preparation, Z.C. and D.W.; writing—review and editing, Z.C. and D.W.; supervision, F.S. and H.M.; funding acquisition, B.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (Grant No. 62173197) and the Tsinghua University Initiative Scientific Research Program (Grant No. 2022Z11QYJ002).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xia, M.; Chen, C.; Sheng, X.; Zhu, X. On Detecting the Invariant Neural Drive to Muscles during Repeated Hand Motions: A Preliminary Study. In Proceedings of the 2021 27th International Conference on Mechatronics and Machine Vision in Practice (M2VIP), Shanghai, China, 26–28 November 2021; pp. 192–196.
  2. De Oliveira, D.S.; Casolo, A.; Balshaw, T.G.; Maeo, S.; Lanza, M.B.; Martin, N.R.; Maffulli, N.; Kinfe, T.M.; Eskofier, B.M.; Folland, J.P.; et al. Neural decoding from surface high-density EMG signals: Influence of anatomy and synchronization on the number of identified motor units. J. Neural Eng. 2022, 19, 046029.
  3. Farina, D.; Jiang, N.; Rehbaum, H.; Holobar, A.; Graimann, B.; Dietl, H.; Aszmann, O.C. The extraction of neural information from the surface EMG for the control of upper-limb prostheses: Emerging avenues and challenges. IEEE Trans. Neural Syst. Rehabil. Eng. 2014, 22, 797–809.
  4. Zi-You, L.; Xin-Gang, Z.; Bi, Z.; Qi-Chuan, D.; Dao-Hui, Z.; Jian-Da, H. Review of sEMG-based motion intent recognition methods in non-ideal conditions. Acta Autom. Sin. 2021, 47, 955–969.
  5. Konrad, P. The ABC of EMG: A Practical Introduction to Kinesiological Electromyography; Noraxon Inc.: Scottsdale, AZ, USA, 2005.
  6. Englehart, K.; Hudgins, B. A robust, real-time control scheme for multifunction myoelectric control. IEEE Trans. Biomed. Eng. 2003, 50, 848–854.
  7. Sartori, M.; Durandau, G.; Došen, S.; Farina, D. Robust simultaneous myoelectric control of multiple degrees of freedom in wrist-hand prostheses by real-time neuromusculoskeletal modeling. J. Neural Eng. 2018, 15, 066026.
  8. Bi, L.; Guan, C. A review on EMG-based motor intention prediction of continuous human upper limb motion for human-robot collaboration. Biomed. Signal Process. Control 2019, 51, 113–127.
  9. Xiong, D.; Zhang, D.; Zhao, X.; Zhao, Y. Deep learning for EMG-based human-machine interaction: A review. IEEE/CAA J. Autom. Sin. 2021, 8, 512–533.
  10. Mohebbian, M.R.; Nosouhi, M.; Fazilati, F.; Esfahani, Z.N.; Amiri, G.; Malekifar, N.; Yusefi, F.; Rastegari, M.; Marateb, H.R. A Comprehensive Review of Myoelectric Prosthesis Control. arXiv 2021, arXiv:2112.13192.
  11. Cordella, F.; Ciancio, A.L.; Sacchetti, R.; Davalli, A.; Cutti, A.G.; Guglielmelli, E.; Zollo, L. Literature review on needs of upper limb prosthesis users. Front. Neurosci. 2016, 10, 209.
  12. Zhang, Q.; Pi, T.; Liu, R.; Xiong, C. Simultaneous and proportional estimation of multijoint kinematics from EMG signals for myocontrol of robotic hands. IEEE/ASME Trans. Mech. 2020, 25, 1953–1960.
  13. Dantas, H.; Warren, D.J.; Wendelken, S.M.; Davis, T.S.; Clark, G.A.; Mathews, V.J. Deep learning movement intent decoders trained with dataset aggregation for prosthetic limb control. IEEE Trans. Biomed. Eng. 2019, 66, 3192–3203.
  14. Lukyanenko, P.; Dewald, H.A.; Lambrecht, J.; Kirsch, R.F.; Tyler, D.J.; Williams, M.R. Stable, simultaneous and proportional 4-DoF prosthetic hand control via synergy-inspired linear interpolation: A case series. J. Neuroeng. Rehabil. 2021, 18, 50.
  15. Lin, C.; Wang, B.; Jiang, N.; Farina, D. Robust extraction of basis functions for simultaneous and proportional myoelectric control via sparse non-negative matrix factorization. J. Neural Eng. 2018, 15, 026017.
  16. Hu, X.; Zeng, H.; Chen, D.; Zhu, J.; Song, A. Real-time continuous hand motion myoelectric decoding by automated data labeling. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6951–6957.
  17. Guo, W.; Ma, C.; Wang, Z.; Zhang, H.; Farina, D.; Jiang, N.; Lin, C. Long exposure convolutional memory network for accurate estimation of finger kinematics from surface electromyographic signals. J. Neural Eng. 2021, 18, 026027.
  18. Dwivedi, A.; Kwon, Y.; McDaid, A.J.; Liarokapis, M. A learning scheme for EMG based decoding of dexterous, in-hand manipulation motions. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 2205–2215.
  19. Ma, C.; Guo, W.; Zhang, H.; Samuel, O.W.; Ji, X.; Xu, L.; Li, G. A novel and efficient feature extraction method for deep learning based continuous estimation. IEEE Robot. Autom. Lett. 2021, 6, 7341–7348.
  20. Piazza, C.; Rossi, M.; Catalano, M.G.; Bicchi, A.; Hargrove, L.J. Evaluation of a simultaneous myoelectric control strategy for a multi-DoF transradial prosthesis. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 2286–2295.
  21. Winters, J.M. Hill-based muscle models: A systems engineering perspective. In Multiple Muscle Systems: Biomechanics and Movement Organization; Springer: Berlin/Heidelberg, Germany, 1990; pp. 69–93.
  22. Pan, L.; Crouch, D.L.; Huang, H. Myoelectric control based on a generic musculoskeletal model: Toward a multi-user neural-machine interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 1435–1442.
  23. Shin, D.; Kim, J.; Koike, Y. A myokinetic arm model for estimating joint torque and stiffness from EMG signals during maintained posture. J. Neurophysiol. 2009, 101, 387–401.
  24. Stapornchaisit, S.; Kim, Y.; Takagi, A.; Yoshimura, N.; Koike, Y. Finger angle estimation from array EMG system using linear regression model with independent component analysis. Front. Neurorobot. 2019, 13, 75.
  25. Crouch, D.L.; Huang, H. Lumped-parameter electromyogram-driven musculoskeletal hand model: A potential platform for real-time prosthesis control. J. Biomech. 2016, 49, 3901–3907.
  26. Zhao, J.; Yu, Y.; Wang, X.; Ma, S.; Sheng, X.; Zhu, X. A musculoskeletal model driven by muscle synergy-derived excitations for hand and wrist movements. J. Neural Eng. 2022, 19, 016027.
  27. Ngeo, J.G.; Tamei, T.; Shibata, T. Continuous and simultaneous estimation of finger kinematics using inputs from an EMG-to-muscle activation model. J. Neuroeng. Rehabil. 2014, 11, 122.
  28. Xiloyannis, M.; Gavriel, C.; Thomik, A.A.; Faisal, A.A. Gaussian process autoregression for simultaneous proportional multi-modal prosthetic control with natural hand kinematics. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1785–1801.
  29. Jiang, N.; Englehart, K.B.; Parker, P.A. Extracting simultaneous and proportional neural control information for multiple-DOF prostheses from the surface electromyographic signal. IEEE Trans. Biomed. Eng. 2008, 56, 1070–1080.
  30. Ison, M.; Vujaklija, I.; Whitsell, B.; Farina, D.; Artemiadis, P. Simultaneous myoelectric control of a robot arm using muscle synergy-inspired inputs from high-density electrode grids. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 6469–6474.
  31. Kapelner, T.; Vujaklija, I.; Jiang, N.; Negro, F.; Aszmann, O.C.; Principe, J.; Farina, D. Predicting wrist kinematics from motor unit discharge timings for the control of active prostheses. J. Neuroeng. Rehabil. 2019, 16, 47.
  32. Hahne, J.M.; Schweisfurth, M.A.; Koppe, M.; Farina, D. Simultaneous control of multiple functions of bionic hand prostheses: Performance and robustness in end users. Sci. Robot. 2018, 3, eaat3630.
  33. Yang, W.; Yang, D.; Liu, Y.; Liu, H. Decoding simultaneous multi-DOF wrist movements from raw EMG signals using a convolutional neural network. IEEE Trans. Hum. Mach. Syst. 2019, 49, 411–420.
  34. Ameri, A.; Akhaee, M.A.; Scheme, E.; Englehart, K. Regression convolutional neural network for improved simultaneous EMG control. J. Neural Eng. 2019, 16, 036015.
  35. Yu, Y.; Chen, C.; Sheng, X.; Zhu, X. Wrist torque estimation via electromyographic motor unit decomposition and image reconstruction. IEEE J. Biomed. Health Inform. 2020, 25, 2557–2566.
  36. Qin, Z.; Stapornchaisit, S.; He, Z.; Yoshimura, N.; Koike, Y. Multi–Joint Angles Estimation of Forearm Motion Using a Regression Model. Front. Neurorobot. 2021, 15, 685961.
  37. Ma, C.; Lin, C.; Samuel, O.W.; Guo, W.; Zhang, H.; Greenwald, S.; Xu, L.; Li, G. A bi-directional LSTM network for estimating continuous upper limb movement from surface electromyography. IEEE Robot. Autom. Lett. 2021, 6, 7217–7224.
  38. Hu, X.; Zeng, H.; Song, A.; Chen, D. Robust continuous hand motion recognition using wearable array myoelectric sensor. IEEE Sens. J. 2021, 21, 20596–20605.
  39. Salatiello, A.; Giese, M.A. Continuous Decoding of Daily-Life Hand Movements from Forearm Muscle Activity for Enhanced Myoelectric Control of Hand Prostheses. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021; pp. 1–8.
  40. Bao, T.; Zaidi, S.A.R.; Xie, S.; Yang, P.; Zhang, Z.Q. A CNN-LSTM hybrid model for wrist kinematics estimation using surface electromyography. IEEE Trans. Instrum. Meas. 2020, 70, 2503809.
  41. Ma, C.; Lin, C.; Samuel, O.W.; Xu, L.; Li, G. Continuous estimation of upper limb joint angle from sEMG signals based on SCA-LSTM deep learning approach. Biomed. Signal Process. Control 2020, 61, 102024.
  42. Bao, T.; Zhao, Y.; Zaidi, S.A.R.; Xie, S.; Yang, P.; Zhang, Z. A deep Kalman filter network for hand kinematics estimation using sEMG. Pattern Recognit. Lett. 2021, 143, 88–94.
  43. Chen, C.; Guo, W.; Ma, C.; Yang, Y.; Wang, Z.; Lin, C. sEMG-based continuous estimation of finger kinematics via large-scale temporal convolutional network. Appl. Sci. 2021, 11, 4678.
  44. Raj, R.; Rejith, R.; Sivanandan, K. Real time identification of human forearm kinematics from surface EMG signal using artificial neural network models. Procedia Technol. 2016, 25, 44–51.
  45. Nasr, A.; Bell, S.; He, J.; Whittaker, R.L.; Jiang, N.; Dickerson, C.R.; McPhee, J. MuscleNET: Mapping electromyography to kinematic and dynamic biomechanical variables by machine learning. J. Neural Eng. 2021, 18, 0460d3.
  46. Yu, Y.; Chen, C.; Zhao, J.; Sheng, X.; Zhu, X. Surface electromyography image-driven torque estimation of multi-DoF wrist movements. IEEE Trans. Ind. Electron. 2021, 69, 795–804.
  47. Kim, D.; Koh, K.; Oppizzi, G.; Baghi, R.; Lo, L.C.; Zhang, C.; Zhang, L.Q. Simultaneous estimations of joint angle and torque in interactions with environments using EMG. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3818–3824.
  48. Chen, C.; Yu, Y.; Sheng, X.; Zhu, X. Non-invasive analysis of motor unit activation during simultaneous and continuous wrist movements. IEEE J. Biomed. Health Inform. 2021, 26, 2106–2115.
  49. Yang, D.; Liu, H. An EMG-based deep learning approach for multi-DOF wrist movement decoding. IEEE Trans. Ind. Electron. 2021, 69, 7099–7108.
  50. Yang, W.; Yang, D.; Liu, Y.; Liu, H. A 3-DOF hemi-constrained wrist motion/force detection device for deploying simultaneous myoelectric control. Med. Biol. Eng. Comput. 2018, 56, 1669–1681.
  51. Dwivedi, A.; Lara, J.; Cheng, L.K.; Paskaranandavadivel, N.; Liarokapis, M. High-density electromyography based control of robotic devices: On the execution of dexterous manipulation tasks. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3825–3831.
  52. Chen, C.; Yu, Y.; Sheng, X.; Farina, D.; Zhu, X. Simultaneous and proportional control of wrist and hand movements by decoding motor unit discharges in real time. J. Neural Eng. 2021, 18, 056010.
  53. Hermens, H.J.; Freriks, B.; Disselhorst-Klug, C.; Rau, G. Development of recommendations for SEMG sensors and sensor placement procedures. J. Electromyogr. Kinesiol. 2000, 10, 361–374.
  54. Drake, J.D.; Callaghan, J.P. Elimination of electrocardiogram contamination from electromyogram signals: An evaluation of currently used removal techniques. J. Electromyogr. Kinesiol. 2006, 16, 175–187.
  55. Reaz, M.B.I.; Hussain, M.S.; Mohd-Yasin, F. Techniques of EMG signal analysis: Detection, processing, classification and applications. Biol. Proced. Online 2006, 8, 11–35.
  56. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
  57. Oskoei, M.A.; Hu, H. Myoelectric control systems—A survey. Biomed. Signal Process. Control 2007, 2, 275–294.
  58. Turner, A.; Shieff, D.; Dwivedi, A.; Liarokapis, M. Comparing machine learning methods and feature extraction techniques for the EMG based decoding of human intention. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Guadalajara, Mexico, 1–5 November 2021; pp. 4738–4743.
  59. Phinyomark, A.; Phukpattaranont, P.; Limsakul, C. Feature reduction and selection for EMG signal classification. Expert Syst. Appl. 2012, 39, 7420–7431.
  60. He, Z.; Qin, Z.; Koike, Y. Continuous estimation of finger and wrist joint angles using a muscle synergy based musculoskeletal model. Appl. Sci. 2022, 12, 3772.
  61. Zhao, Y.; Zhang, Z.; Li, Z.; Yang, Z.; Dehghani-Sanij, A.A.; Xie, S. An EMG-driven musculoskeletal model for estimating continuous wrist motion. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 3113–3120.
  62. Kawase, T.; Sakurada, T.; Koike, Y.; Kansaku, K. A hybrid BMI-based exoskeleton for paresis: EMG control for assisting arm movements. J. Neural Eng. 2017, 14, 016015.
  63. Buchanan, T.S.; Lloyd, D.G.; Manal, K.; Besier, T.F. Neuromusculoskeletal modeling: Estimation of muscle forces and joint moments and movements from measurements of neural command. J. Appl. Biomech. 2004, 20, 367–395.
  64. Inkol, K.A.; Brown, C.; McNally, W.; Jansen, C.; McPhee, J. Muscle torque generators in multibody dynamic simulations of optimal sports performance. Multibody Syst. Dyn. 2020, 50, 435–452.
  65. Pan, L.; Crouch, D.L.; Huang, H. Comparing EMG-based human-machine interfaces for estimating continuous, coordinated movements. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 2145–2154.
  66. Ahsan, M.R.; Ibrahimy, M.I.; Khalifa, O.O. EMG signal classification for human computer interaction: A review. Eur. J. Sci. Res. 2009, 33, 480–501.
  67. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554.
  68. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551.
  69. Jarque-Bou, N.J.; Sancho-Bru, J.L.; Vergara, M. A systematic review of EMG applications for the characterization of forearm and hand muscle activity during activities of daily living: Results, challenges, and open issues. Sensors 2021, 21, 3035.
  70. Sartori, M.; Lloyd, D.G.; Farina, D. Neural data-driven musculoskeletal modeling for personalized neurorehabilitation technologies. IEEE Trans. Biomed. Eng. 2016, 63, 879–893.
  71. Todorov, E.; Ghahramani, Z. Analysis of the synergies underlying complex hand manipulation. In IEEE Engineering in Medicine and Biology Magazine; IEEE: New York, NY, USA, 2004; Volume 2, pp. 4637–4640.
  72. Suzuki, M.; Shiller, D.M.; Gribble, P.L.; Ostry, D.J. Relationship between cocontraction, movement kinematics and phasic muscle activity in single-joint arm movement. Exp. Brain Res. 2001, 140, 171–181.
  73. Jiang, N.; Vujaklija, I.; Rehbaum, H.; Graimann, B.; Farina, D. Is accurate mapping of EMG signals on kinematics needed for precise online myoelectric control? IEEE Trans. Neural Syst. Rehabil. Eng. 2013, 22, 549–558.
  74. Atzori, M.; Gijsberts, A.; Castellini, C.; Caputo, B.; Hager, A.G.M.; Elsig, S.; Giatsidis, G.; Bassetto, F.; Müller, H. Electromyography data for non-invasive naturally-controlled robotic hand prostheses. Sci. Data 2014, 1, 140053.
  75. Simão, M.; Mendes, N.; Gibaru, O.; Neto, P. A review on electromyography decoding and pattern recognition for human-machine interaction. IEEE Access 2019, 7, 39564–39582.
  76. Asghar, A.; Jawaid Khan, S.; Azim, F.; Shakeel, C.S.; Hussain, A.; Niazi, I.K. Review on electromyography based intention for upper limb control using pattern recognition for human-machine interaction. J. Eng. Med. 2022, 236, 628–645.
  77. Ghaderi, P.; Nosouhi, M.; Jordanic, M.; Marateb, H.R.; Mañanas, M.A.; Farina, D. Kernel density estimation of electromyographic signals and ensemble learning for highly accurate classification of a large set of hand/wrist motions. Front. Neurosci. 2022, 16, 796711.
  78. Pizzolato, S.; Tagliapietra, L.; Cognolato, M.; Reggiani, M.; Müller, H.; Atzori, M. Comparison of six electromyography acquisition setups on hand movement classification tasks. PLoS ONE 2017, 12, e0186132.
  79. Sri-Iesaranusorn, P.; Chaiyaroj, A.; Buekban, C.; Dumnin, S.; Pongthornseri, R.; Thanawattano, C.; Surangsrirat, D. Classification of 41 hand and wrist movements via surface electromyogram using deep neural network. Front. Bioeng. Biotechnol. 2021, 9, 548357.
  80. Zhai, X.; Jelfs, B.; Chan, R.H.; Tin, C. Self-recalibrating surface EMG pattern recognition for neuroprosthesis control based on convolutional neural network. Front. Neurosci. 2017, 11, 379.
  81. Ghazaei, G.; Alameer, A.; Degenaar, P.; Morgan, G.; Nazarpour, K. Deep learning-based artificial vision for grasp classification in myoelectric hands. J. Neural Eng. 2017, 14, 036025.
  82. Dai, C.; Hu, X. Finger joint angle estimation based on motoneuron discharge activities. IEEE J. Biomed. Health Inform. 2019, 24, 760–767.
  83. Ding, Q.; Xiong, A.; Zhao, X.; Han, J. A review on researches and applications of sEMG-based motion intent recognition methods. Acta Autom. Sin. 2016, 42, 13–25.
  84. Luchetti, M.; Cutti, A.G.; Verni, G.; Sacchetti, R.; Rossi, N. Impact of Michelangelo prosthetic hand: Findings from a crossover longitudinal study. J. Rehabil. Res. Dev. 2015, 52, 605–618.
  85. Van Der Niet Otr, O.; Reinders-Messelink, H.A.; Bongers, R.M.; Bouwsema, H.; Van Der Sluis, C.K. The i-LIMB hand and the DMC plus hand compared: A case report. Prosthetics Orthot. Int. 2010, 34, 216–220.
  86. Belter, J.T.; Segil, J.L.; Dollar, A.M.; Weir, R.F. Mechanical design and performance specifications of anthropomorphic prosthetic hands: A review. J. Rehabil. Res. Dev. 2013, 50, 599.
  87. Di Domenico, D.; Marinelli, A.; Boccardo, N.; Semprini, M.; Lombardi, L.; Canepa, M.; Stedman, S.; Bellingegni, A.D.; Chiappalone, M.; Gruppioni, E.; et al. Hannes prosthesis control based on regression machine learning algorithms. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 5997–6002.
  88. Sensinger, J.W.; Dosen, S. A review of sensory feedback in upper-limb prostheses from the perspective of human motor control. Front. Neurosci. 2020, 14, 345.
  89. Kim, K. A review of haptic feedback through peripheral nerve stimulation for upper extremity prosthetics. Curr. Opin. Biomed. Eng. 2022, 21, 100368.
  90. Stephens-Fripp, B.; Alici, G.; Mutlu, R. A review of non-invasive sensory feedback methods for transradial prosthetic hands. IEEE Access 2018, 6, 6878–6899.
  91. Xu, H.; Chai, G.; Zhang, N.; Gu, G. Restoring finger-specific tactile sensations with a sensory soft neuroprosthetic hand through electrotactile stimulation. Soft Sci. 2022, 2, 19.
  92. Shehata, A.W.; Scheme, E.J.; Sensinger, J.W. Audible feedback improves internal model strength and performance of myoelectric prosthesis control. Sci. Rep. 2018, 8, 8541.
  93. Li, K.; Zhou, Y.; Zhou, D.; Zeng, J.; Fang, Y.; Yang, J.; Liu, H. Electrotactile Feedback-Based Muscle Fatigue Alleviation for Hand Manipulation. Int. J. Humanoid Robot. 2021, 18, 2050024.
  94. Cha, H.; An, S.; Choi, S.; Yang, S.; Park, S.; Park, S. Study on Intention Recognition and Sensory Feedback: Control of Robotic Prosthetic Hand Through EMG Classification and Proprioceptive Feedback Using Rule-based Haptic Device. IEEE Trans. Haptics 2022, 15, 560–571.
  95. Dwivedi, A.; Shieff, D.; Turner, A.; Gorjup, G.; Kwon, Y.; Liarokapis, M. A shared control framework for robotic telemanipulation combining electromyography based motion estimation and compliance control. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 9467–9473.
  96. Fang, B.; Ma, X.; Wang, J.; Sun, F. Vision-based posture-consistent teleoperation of robotic arm using multi-stage deep neural network. Robot. Auton. Syst. 2020, 131, 103592.
  97. Fang, B.; Ding, W.; Sun, F.; Shan, J.; Wang, X.; Wang, C.; Zhang, X. Brain-computer interface integrated with augmented reality for human-robot interaction. IEEE Trans. Cogn. Dev. Syst. 2022, 1.
  98. Gillini, G.; Di Lillo, P.; Arrichiello, F. An assistive shared control architecture for a robotic arm using EEG-based BCI with motor imagery. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 4132–4137.
  99. Abbink, D.A.; Carlson, T.; Mulder, M.; De Winter, J.C.; Aminravan, F.; Gibo, T.L.; Boer, E.R. A topology of shared control systems—Finding common ground in diversity. IEEE Trans. Hum. Mach. Syst. 2018, 48, 509–525.
  100. Zhou, T.; Wachs, J.P. Early prediction for physical human robot collaboration in the operating room. Auton. Robot. 2018, 42, 977–995.
  101. Li, G.; Li, Q.; Yang, C.; Su, Y.; Yuan, Z.; Wu, X. The Classification and New Trends of Shared Control Strategies in Telerobotic Systems: A Survey. IEEE Trans. Haptics 2023, 16, 118–133.
  102. Mouchoux, J.; Carisi, S.; Dosen, S.; Farina, D.; Schilling, A.F.; Markovic, M. Artificial perception and semiautonomous control in myoelectric hand prostheses increases performance and decreases effort. IEEE Trans. Robot. 2021, 37, 1298–1312.
  103. Castro, M.N.; Dosen, S. Continuous Semi-autonomous Prosthesis Control Using a Depth Sensor on the Hand. Front. Neurorobot. 2022, 16, 814973.
  104. Starke, J.; Weiner, P.; Crell, M.; Asfour, T. Semi-autonomous control of prosthetic hands based on multimodal sensing, human grasp demonstration and user intention. Robot. Auton. Syst. 2022, 154, 104123.
  105. Vasile, F.; Maiettini, E.; Pasquale, G.; Florio, A.; Boccardo, N.; Natale, L. Grasp Pre-shape Selection by Synthetic Training: Eye-in-hand Shared Control on the Hannes Prosthesis. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 13112–13119.
  106. Cipriani, C.; Zaccone, F.; Micera, S.; Carrozza, M.C. On the shared control of an EMG-controlled prosthetic hand: Analysis of user–prosthesis interaction. IEEE Trans. Robot. 2008, 24, 170–184.
  107. Zhuang, K.Z.; Sommer, N.; Mendez, V.; Aryan, S.; Formento, E.; D’Anna, E.; Artoni, F.; Petrini, F.; Granata, G.; Cannaviello, G.; et al. Shared human–robot proportional control of a dexterous myoelectric prosthesis. Nat. Mach. Intell. 2019, 1, 400–411.
  108. Seppich, N.; Tacca, N.; Chao, K.Y.; Akim, M.; Hidalgo-Carvajal, D.; Pozo Fortunić, E.; Tödtheide, A.; Kühn, J.; Haddadin, S. CyberLimb: A novel robotic prosthesis concept with shared and intuitive control. J. Neuroeng. Rehabil. 2022, 19, 41.
  109. Mouchoux, J.; Bravo-Cabrera, M.A.; Dosen, S.; Schilling, A.F.; Markovic, M. Impact of shared control modalities on performance and usability of semi-autonomous prostheses. Front. Neurorobot. 2021, 15, 172.
  110. Furui, A.; Eto, S.; Nakagaki, K.; Shimada, K.; Nakamura, G.; Masuda, A.; Chin, T.; Tsuji, T. A myoelectric prosthetic hand with muscle synergy–based motion determination and impedance model–based biomimetic control. Sci. Robot. 2019, 4, eaaw6339.
  111. Wang, Y.; Tian, Y.; She, H.; Jiang, Y.; Yokoi, H.; Liu, Y. Design of an effective prosthetic hand system for adaptive grasping with the control of myoelectric pattern recognition approach. Micromachines 2022, 13, 219.
  112. Shi, C.; Yang, D.; Zhao, J.; Jiang, L. i-MYO: A Hybrid Prosthetic Hand Control System based on Eye-tracking, Augmented Reality and Myoelectric signal. arXiv 2022, arXiv:2205.08948.
  113. Luo, Q.; Niu, C.M.; Chou, C.H.; Liang, W.; Deng, X.; Hao, M.; Lan, N. Biorealistic control of hand prosthesis augments functional performance of individuals with amputation. Front. Neurosci. 2021, 15, 1668.
  114. Volkmar, R.; Dosen, S.; Gonzalez-Vargas, J.; Baum, M.; Markovic, M. Improving bimanual interaction with a prosthesis using semi-autonomous control. J. Neuroeng. Rehabil. 2019, 16, 140.
  115. Kieliba, P.; Clode, D.; Maimon-Mor, R.O.; Makin, T.R. Robotic hand augmentation drives changes in neural body representation. Sci. Robot. 2021, 6, eabd7935.
  116. Frey, S.T.; Haque, A.T.; Tutika, R.; Krotz, E.V.; Lee, C.; Haverkamp, C.B.; Markvicka, E.J.; Bartlett, M.D. Octopus-inspired adhesive skins for intelligent and rapidly switchable underwater adhesion. Sci. Adv. 2022, 8, eabq1905.
  117. Chang, M.H.; Kim, D.H.; Kim, S.H.; Lee, Y.; Cho, S.; Park, H.S.; Cho, K.J. Anthropomorphic prosthetic hand inspired by efficient swing mechanics for sports activities. IEEE/ASME Trans. Mech. 2021, 27, 1196–1207.
  118. Lee, J.; Kim, J.; Park, S.; Hwang, D.; Yang, S. Soft robotic palm with tunable stiffness using dual-layered particle jamming mechanism. IEEE/ASME Trans. Mech. 2021, 26, 1820–1827.
  119. Heo, S.H.; Kim, C.; Kim, T.S.; Park, H.S. Human-palm-inspired artificial skin material enhances operational functionality of hand manipulation. Adv. Funct. Mater. 2020, 30, 2002360.
  120. Zhou, H.; Alici, G. Non-invasive human-machine interface (HMI) systems with hybrid on-body sensors for controlling upper-limb prosthesis: A review. IEEE Sens. J. 2022, 22, 10292–10307.
  121. Xue, Y.; Ju, Z.; Xiang, K.; Chen, J.; Liu, H. Multiple sensors based hand motion recognition using adaptive directed acyclic graph. Appl. Sci. 2017, 7, 358.
  122. Biddiss, E.; Beaton, D.; Chau, T. Consumer design priorities for upper limb prosthetics. Disabil. Rehabil. Assist. Technol. 2007, 2, 346–357.
  123. Jang, C.H.; Yang, H.S.; Yang, H.E.; Lee, S.Y.; Kwon, J.W.; Yun, B.D.; Choi, J.Y.; Kim, S.N.; Jeong, H.W. A survey on activities of daily living and occupations of upper extremity amputees. Ann. Rehabil. Med. 2011, 35, 907–921.
  124. Prahm, C.; Schulz, A.; Paaßen, B.; Schoisswohl, J.; Kaniusas, E.; Dorffner, G.; Hammer, B.; Aszmann, O. Counteracting electrode shifts in upper-limb prosthesis control via transfer learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 956–962.
  125. Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Laviolette, F.; Gosselin, B. Deep learning for electromyographic hand gesture signal classification using transfer learning. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 760–771.
  126. Park, K.H.; Suk, H.I.; Lee, S.W. Position-independent decoding of movement intention for proportional myoelectric interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 24, 928–939.
  127. Matsubara, T.; Morimoto, J. Bilinear modeling of EMG signals to extract user-independent features for multiuser myoelectric interface. IEEE Trans. Biomed. Eng. 2013, 60, 2205–2213.
  128. Xiong, A.; Zhao, X.; Han, J.; Liu, G.; Ding, Q. An user-independent gesture recognition method based on sEMG decomposition. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015; pp. 4185–4190.
  129. Yang, D.; Gu, Y.; Jiang, L.; Osborn, L.; Liu, H. Dynamic training protocol improves the robustness of PR-based myoelectric control. Biomed. Signal Process. Control 2017, 31, 249–256.
  130. Kristoffersen, M.B.; Franzke, A.W.; Van Der Sluis, C.K.; Bongers, R.M.; Murgia, A. Should hands be restricted when measuring able-bodied participants to evaluate machine learning controlled prosthetic hands? IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 1977–1983.
  131. Engdahl, S.M.; Acuña, S.A.; King, E.L.; Bashatah, A.; Sikdar, S. First demonstration of functional task performance using a sonomyographic prosthesis: A case study. Front. Bioeng. Biotechnol. 2022, 10, 876836.
  132. Wang, J.; Bi, L.; Fei, W.; Tian, K. EEG-Based Continuous Hand Movement Decoding Using Improved Center-Out Paradigm. IEEE Trans. Neural Syst. Rehabil. Eng. 2022, 30, 2845–2855.
Figure 1. Basic concepts of myoelectric control.
Figure 2. Application examples of decoding models: (a) Example of musculoskeletal model applications [23,60,61,62]. (b) Examples of traditional machine learning model applications. (c) Example of CNN-based model applications [46,49]. (d) Example of RNN-based model applications [37]. (e) Example of hybrid-structured model applications [40,41].
Figure 3. Control strategy categories. (a) Unidirectional control strategy. (b) Feedback control strategy. (c) Shared control strategy.
Table 1. Typical application examples of each module.
Module | Types | Reference
Feature engineering | Single time-domain feature | MAV [12,13,14], RMS [15,16,17]
Feature engineering | Combined time-domain features | RMS+WL+ZC [18], ZOM+SOM+FOM+PS+SE+USTD [19], MAV+WL+ZC+SSC+SOAMC [20]
Decoding model | Musculoskeletal model | Hill-type muscle model [7,21,22], Mykin model [23,24], Lumped-parameter model [25,26]
Decoding model | Traditional machine learning model | Gaussian processes [12,27,28], NMF [15,29,30], Linear regression [31,32]
Decoding model | Deep learning model | CNN-based model [33,34,35,36], RNN-based model [37,38,39], Hybrid-structured model [40,41,42]
Mapping parameters | Kinematic parameters | Joint angle [12,17,43], Joint angular velocity [28,44], Joint angular acceleration [39,45]
Mapping parameters | Dynamics parameters | Joint torque [35,46,47,48]
Mapping parameters | Other parameters | 3D coordinate value [49,50], Movement of the in-hand object [51], Multidimensional arrays [14], Movement activation level [52]
MAV: Mean absolute value, RMS: Root mean square, WL: Waveform length, ZC: Zero crossing, SSC: Slope-sign changes, SOAMC: Sixth-order autoregressive model coefficients, ZOM: Zero-order moment, SOM: Second-order moment, FOM: Fourth-order moment, PS: Peak stress, SE: Shake expectation, USTD: Unbiased standard deviation, NMF: Non-negative matrix factorization.
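As a concrete companion to the abbreviations above, the following is a minimal sketch of how the basic time-domain features (MAV, RMS, WL, ZC, SSC) are commonly computed for a single-channel sEMG window, using their standard formulations; the noise threshold eps and the window length are assumed parameters, not values taken from the cited works.

```python
import numpy as np

def emg_time_domain_features(x, eps=1e-5):
    """Standard time-domain features for one single-channel EMG window `x`;
    `eps` is an assumed noise threshold for the ZC and SSC counts."""
    mav = np.mean(np.abs(x))                     # MAV: mean absolute value
    rms = np.sqrt(np.mean(x ** 2))               # RMS: root mean square
    wl = np.sum(np.abs(np.diff(x)))              # WL: waveform length
    zc = np.sum((x[:-1] * x[1:] < 0) &
                (np.abs(x[:-1] - x[1:]) > eps))  # ZC: zero crossings
    d = np.diff(x)
    ssc = np.sum((d[:-1] * d[1:] < 0) &
                 ((np.abs(d[:-1]) > eps) | (np.abs(d[1:]) > eps)))  # SSC
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": int(zc), "SSC": int(ssc)}

print(emg_time_domain_features(np.random.randn(200)))  # synthetic 200-sample window
```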
Table 2. Summary of current application scenarios for prosthetic hand manipulation.
Manipulation Scenarios | Tasks | Reference
Simple manipulation tasks | Grasp test | [81,110,111,112]
Simple manipulation tasks | Box-and-blocks test | [32,107,113]
Simple manipulation tasks | Relocation test | [32,102,103]
Simple manipulation tasks | Pouring or drinking | [20,49,107]
Complex manipulation tasks | Screwing test | [49,108]
Complex manipulation tasks | Block building | [49]
Complex manipulation tasks | Squeezing toothpaste | [20]
Complex manipulation tasks | Bimanual interaction | [114]
