Article

Mapping Method of Human Arm Motion Based on Surface Electromyography Signals

Yuanyuan Zheng, Gang Zheng, Hanqi Zhang, Bochen Zhao and Peng Sun

1 School of Mechanical and Energy Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
2 Key Laboratory of Special Purpose Equipment and Advanced Processing Technology, Ministry of Education and Zhejiang Province, Zhejiang University of Technology, Hangzhou 310023, China
3 College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(9), 2827; https://doi.org/10.3390/s24092827
Submission received: 29 March 2024 / Revised: 19 April 2024 / Accepted: 24 April 2024 / Published: 29 April 2024

Abstract

This paper investigates a method for precisely mapping human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with joint angles accurately calculated from an Inertial Measurement Unit (IMU), allows action recognition and mapping through deep learning algorithms. First, signal acquisition and processing were carried out, covering sensor placement and data collection for various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions). Interference was then removed with filters, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with distinct features. This paper also constructs a hybrid network model combining Convolutional Neural Networks and Artificial Neural Networks and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fit between sEMG signals and joint angles was established based on a backpropagation neural network incorporating a momentum term and adaptive learning-rate adjustment. Finally, based on the gesture recognition and joint angle prediction models, prosthetic arm control experiments were conducted, achieving highly accurate arm movement prediction and execution. This work not only validates the potential of sEMG signals for the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices.

1. Introduction

As society progresses, the demand for human–computer interaction control becomes increasingly diversified, complex, and high-precision. Assistive exoskeletons, artificial neural prostheses, rehabilitative orthotic training devices, and other rehabilitation equipment are all realized through human–computer interaction control technology [1,2]. Biometric signals contain rich information on limb movements; by analyzing these signals to recognize human movement intention, more real-time and coordinated human–computer interaction control can be achieved [3]. In particular, human–computer interaction methods based on sEMG signals are advanced and unaffected by limb integrity, making them suitable not only for general users but also for meeting the interaction needs of people with disabilities [4].
sEMG signals were first applied to gesture recognition. Zhang, W.L. et al. [5] analyzed time-domain features, selected five feature values as input, and employed an artificial neural network to recognize electromyogram data collected from the upper limb; this method successfully classified eight hand gestures with a final recognition rate of 89.31%. Zhao, S.H. et al. processed eight-channel sEMG signals with a moving average, segmented the results to extract time-domain features, and then recognized four types of hand gestures using a dynamic time warping algorithm, reaching a recognition rate of up to 94.7% [6]. Kang, S.Y. et al. collected sEMG signals from the arm and improved online hand-gesture recognition using deep learning methods, achieving a recognition rate of 90% in network data testing [7]. As the number of recognized gestures increases, recognition accuracy tends to decrease, mainly because the selected features are rather limited. Therefore, expanding the range of feature selection and using deep learning to replace traditional classification methods, such as Support Vector Machines (SVM) or shallow neural networks, can significantly enhance the recognition rate [8,9]. These studies highlight the potential of deep learning to improve recognition accuracy and the need to expand feature selection to accommodate complex gesture recognition.
Although sEMG signals can classify and recognize discrete motion patterns such as clenched fists and extended palms, they cannot by themselves continuously estimate human motion intention. Continuous estimation of joint motion is key to achieving flexible, coordinated movement between rehabilitation assistance devices and the human body. Jiang et al. [10] established a Hill-type muscle model [11,12] and a joint geometric parameter model, using sEMG signals to estimate joint torque and thereby continuously estimate single-joint movement. However, human skeletal muscle parameters are difficult to measure, and model complexity limits this approach to situations with few degrees of freedom. Liu, Q. et al. [13] used principal component analysis to extract the principal components of multi-channel sEMG signals and established a high-order polynomial model from the principal components to upper limb joint angles, achieving joint angle estimation; still, this approach cannot explain the influence of each sEMG channel on the estimates [14,15]. For patients with limb movement disorders, and under factors such as motion interference and sweating, sEMG signals directly related to joint movement cannot always be monitored, and usually only the sEMG signals of the remaining normally functioning muscles can be used [16,17].
Based on these considerations, this paper studies an sEMG signal-joint angle model and performs feature extraction on sEMG signals. Using a BP neural network, we established a nonlinear relationship between sEMG signals and joint angles. We then predicted single-degree-of-freedom and combined motions of the shoulder and elbow joints. Finally, we conducted a humanoid robotic arm control experiment that successfully achieved the prediction and execution of arm movements.

2. Signal Acquisition Strategy

2.1. The Placement of sEMG Acquisition Device

By analyzing the physiological structure and movement patterns of the human arm, seven muscles most directly related to the movements of the shoulder joint, wrist joint, and forearm were selected for sEMG signal collection: the trapezius, the long head muscle, the biceps brachii, the triceps brachii, and the anterior, middle, and posterior parts of the deltoid. The back of the hand and the elbow joint, two body parts with no prominent electromyographic signals, were chosen as sites for the reference electrodes. The arm electromyography acquisition device is the BCIduino electromyography device [18], used with EMGduino electromyography amplifier lead wires. The electrode patches are affixed to the middle of each muscle, and the electrode positions are shown in Figure 1.
For hand actions, on the other hand, only a few predetermined gestures are recognized, such as clenching the fingers and flipping the palm up or down; these gestures are designed to coordinate the grasping function of the humanoid robotic arm. The main reason is that the muscle groups controlling the hand (mainly the fingers) are widely distributed and their signals are difficult to collect [19,20,21], so mapping fine hand movements from sEMG signals is challenging.
The gesture acquisition device used is the MYO armband, which is worn just above the wrist, near the middle of the forearm. The muscle activity in this area can control most hand gestures. The wearing position of the MYO armband is shown in Figure 2.

2.2. The Placement of Joint Angle Sensors

Using IMU sensors to calculate the motion angles of each joint of the human arm is analogous to solving the inverse kinematics of a series-parallel hybrid robotic arm [22,23,24]. We employed a total of three IMU sensors (designated IMU-Base, IMU-Shoulder, and IMU-Elbow). The IMU-Base is worn at the chest, where there is no limb movement, and serves to calibrate the zero positions of the IMU-Shoulder and IMU-Elbow. The IMU-Shoulder is worn on the upper arm to calculate the angular changes of the three degrees of freedom of the shoulder joint. The IMU-Elbow is worn on the forearm to measure the angular changes of the two degrees of freedom at the elbow joint. The wearing positions of the IMU sensors are illustrated in Figure 3.
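To make the angle computation concrete, the following is a minimal sketch of how a joint angle can be recovered from two IMU orientation quaternions. It assumes each IMU reports a unit quaternion in a shared world frame after the IMU-Base zero calibration; the function names and sample values are illustrative, not the authors' implementation.

```python
import numpy as np

def quat_conj(q):
    # Conjugate of a unit quaternion q = [w, x, y, z] (its inverse).
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    # Hamilton product a * b.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def relative_angle_deg(q_parent, q_child):
    # Orientation of the child segment expressed in the parent frame.
    q_rel = quat_mul(quat_conj(q_parent), q_child)
    # The rotation angle is encoded in the scalar part of the quaternion.
    w = np.clip(q_rel[0], -1.0, 1.0)
    return np.degrees(2.0 * np.arccos(abs(w)))

# Hypothetical readings: IMU-Base (chest) vs. IMU-Shoulder (upper arm).
q_base = np.array([1.0, 0.0, 0.0, 0.0])           # calibrated zero pose
q_upper_arm = np.array([0.924, 0.0, 0.383, 0.0])  # ~45 degrees about y
print(relative_angle_deg(q_base, q_upper_arm))    # ~45.0
```

In practice, the single rotation angle would be further decomposed into the joint's anatomical degrees of freedom (e.g., Euler angles in a chosen sequence), mirroring the inverse-kinematics view described above.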

2.3. The Setting of Arm Movements

2.3.1. Gesture Actions

The first gesture is clenching the palm into a fist, which corresponds to the command for the robotic claw to close and grasp an object; conversely, relaxing the fist corresponds to the command for the robotic claw to open, as shown in Figure 2a. Secondly, because dangerous situations may arise while the robotic arm is moving, flipping the palm upward is designated as the emergency-stop gesture to ensure safe operation, as shown in Figure 2b. Finally, once the danger has been eliminated and the operational area is safe, flipping the palm downward is the command to restart the robotic arm and resume movement, as displayed in Figure 2c. The collected signals consist of two parts: a training set used for algorithm training and a test set used to assess recognition performance. The flow of a single group action is presented in Figure 4 (taking the fist-clenching action as an example).
The specific data collection experimental steps are as follows:
  • The participant maintains a comfortable sitting position, with the left arm relaxed and hanging by the side of the body and the right forearm bent and placed on the table in a relaxed state. The right hand performs a gesture and holds it for 3 s. After completing a full gesture, the hand returns to the original state and relaxes for 5 s.
  • Each set of movements is performed five times consecutively. Each action should strive to exert the same force as the previous action. After completing a set of movements, stop collecting the EMG signal, rest for 2–3 min, and once ready, start the next type of movement. Continue collecting data sequentially until the entire dataset has been gathered.
  • The data collection process for the test set and training set is the same, and the data for the training set and test set are collected once in the same order and manner.

2.3.2. Single-Degree-of-Freedom Joint Movement

Based on the range of human arm joint movement, we established the range of motion for each single-degree-of-freedom joint movement in the data collection experiment. Each single-degree-of-freedom range of motion is divided into five groups, as shown in Table 1.
Before the formal start of the experiment, the participant should be acquainted with the entire experimental procedure and safety precautions; that is, participants should practice the experimental movements without wearing the sEMG acquisition device and IMU sensors to ensure the overall success of the experiment. The collection of a single set of motion signals, taking elbow flexion to 90° as an example, is shown in Figure 5.
The specific collection experimental steps are as follows:
  • The participant maintains a standing posture, arms relaxed and hanging vertically at the sides of the body. With the elbow joint as the pivot, the forearm is steadily raised away from the body, keeping additional movements to a minimum, until the angle between the forearm and upper arm is approximately 90°. This position is held for 3 s before returning the arm to the initial position (relaxing for 5 s). This step is repeated five times consecutively, with each repetition striving to exert a similar force as the previous one.
  • For the same single-degree-of-freedom joint motion, perform the other groups of actions in the order described above. After completing all groups for one degree of freedom, stop the sEMG collection and rest for 2–3 min; ensure the participant is in good condition before starting the next single-degree-of-freedom movement. Collect in sequence until the single-degree-of-freedom movement data for all joints have been gathered.
  • The data collection process for the test set and training set is the same, and the data for the training set and test set are collected once in the same order and manner.

2.3.3. Coordinated Motion of Multiple Joints

Both experiments above collect data from single-degree-of-freedom movements of individual joints. In actual human arm movements, however, multiple joints move in combination. To verify whether sEMG signals differ between single-degree-of-freedom joint motions and multi-joint coordinated motions, two combined movements were designed: abduction and raising of the arm, and crossing the arm with the palm clenched. The abduction-and-raising movement includes shoulder abduction/adduction, elbow flexion/extension, and forearm rotation, as shown in Figure 6. The cross-arm-with-palm-clenched movement involves shoulder flexion/extension, elbow flexion/extension, forearm rotation, and hand clenching, as illustrated in Figure 7. The specific experimental procedures and considerations are consistent with the single-degree-of-freedom acquisition experiment.

3. Gesture Action Recognition

3.1. sEMG Signal Processing

Amplification, filtering [25], and other operations remove noise interference from the sEMG signal, after which features are extracted from the filtered signal. We used normalization [26] to eliminate individual differences between sEMG signals, and a Moving Average Filter (MAF) to reduce the impact of short-term random fluctuations on the overall trend of the signal. Finally, we calculated the mean and variance of the MAF output to obtain sEMG signals with distinct features, laying the foundation for subsequent gesture recognition [27].
The unprocessed original sEMG signal is shown in Figure 8. A 200 Hz low-pass filter first removes high-frequency interference, a 20 Hz high-pass filter then removes interference caused by motion artifacts, and finally a 50 Hz notch filter removes power-line interference. The filtered electromyography signal is shown in Figure 9.
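As an illustration of this filtering chain, the sketch below applies the same three stages with SciPy. The sampling rate is an assumption (the paper does not state it), and the zero-phase filtfilt call is our choice, not a documented detail of the authors' pipeline.

```python
import numpy as np
from scipy import signal

FS = 1000  # assumed sampling rate (Hz); not specified in the paper

def preprocess_semg(raw, fs=FS):
    # 200 Hz low-pass: suppress high-frequency interference.
    b_lp, a_lp = signal.butter(4, 200, btype="low", fs=fs)
    x = signal.filtfilt(b_lp, a_lp, raw)
    # 20 Hz high-pass: remove motion artifacts and baseline drift.
    b_hp, a_hp = signal.butter(4, 20, btype="high", fs=fs)
    x = signal.filtfilt(b_hp, a_hp, x)
    # 50 Hz notch: remove power-line interference.
    b_n, a_n = signal.iirnotch(50, Q=30, fs=fs)
    return signal.filtfilt(b_n, a_n, x)

raw = np.random.randn(5 * FS)  # stand-in for one channel of raw sEMG
clean = preprocess_semg(raw)
```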
Time-domain characteristics are computed from the amplitude of the sEMG time series and effectively reflect muscle strength. The mean absolute value (MAV) and root mean square (RMS) time-domain features of the sEMG signals were extracted, as shown in Figure 10:
$$\mathrm{MAV} = \frac{1}{N}\sum_{i=1}^{N}\lvert x_i \rvert, \qquad \mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^{2}},$$

where $N$ is the total number of samples in a single channel and $x_i$ is the signal value at sampling point $i$.
To eliminate individual differences introduced by multiple acquisitions, the sEMG signals must be normalized. The interval scaling method is used, which normalizes the extracted feature values to the (0, 1) range using the maximum and minimum feature values, thus removing differences between the data. The normalized result is shown in Figure 11, and the calculation formula is:
$$x' = \frac{x_i - x_{\min}}{x_{\max} - x_{\min}},$$

where $x_i$ is the original feature value, $x_{\min}$ is the minimum feature value, $x_{\max}$ is the maximum feature value, and $x'$ is the new feature value.
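A minimal sketch of the windowed MAV/RMS feature extraction and the interval-scaling normalization described above; the window length and step are illustrative assumptions.

```python
import numpy as np

def mav(x):
    # Mean absolute value of one window.
    return np.mean(np.abs(x))

def rms(x):
    # Root mean square of one window.
    return np.sqrt(np.mean(np.square(x)))

def min_max_normalize(features):
    # Interval scaling: map each feature column into the (0, 1) range.
    f = np.asarray(features, dtype=float)
    return (f - f.min(axis=0)) / (f.max(axis=0) - f.min(axis=0))

# Sliding-window feature extraction for one filtered channel.
window, step = 200, 50           # assumed window parameters
sig = np.random.randn(2000)      # stand-in for a filtered sEMG channel
feats = np.array([[mav(sig[i:i + window]), rms(sig[i:i + window])]
                  for i in range(0, len(sig) - window + 1, step)])
feats_norm = min_max_normalize(feats)  # columns: normalized MAV, RMS
```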

3.2. Establishment of Hybrid Network Model

Based on the 8 × 200 matrix (channel × time) obtained from the MYO armband, a ConvNet architecture is implemented in PyTorch 2.0, with ReLU as the nonlinear activation function. After tuning to balance convergence speed against the ability to reach a good minimum, the learning rate of the ConvNet [28] is set to 0.00681292, the dropout rate to 0.5, and the batch size to 128. The network architecture is shown in Figure 12.
Because different channels of sEMG data correspond to different positions on the arm, convolution and pooling are performed only along the length (time) direction of the sEMG image; this extracts temporal features while keeping the data of different channels independent. After three convolution and two pooling operations on the original data, the resulting tensor has dimensions (64, 128, 5, 8); flattening it into vector form yields data of dimensions (64, 5120).
On the other hand, since the characteristics of sEMG signals are also contained in the frequency and time-frequency domains, frequency-domain features are extracted after applying a Fourier transform to the original signal. Because the RMS value of the signal amplitude best reflects the degree of muscle activity, the RMS feature was also extracted manually. The manually extracted feature vectors are then concatenated with the feature vectors produced by the convolutional network and fed into the fully connected network; after two fully connected layers, score vectors for each gesture category are obtained.
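The PyTorch sketch below illustrates this hybrid structure: convolution and pooling along the time axis only, flattening to the 5120-dimensional vector stated above, and fusion of a manually computed per-channel RMS feature before two fully connected layers. The kernel sizes are assumptions chosen so the intermediate shape matches the reported (64, 128, 5, 8); the paper does not list them, and only the RMS branch of the manual features is shown.

```python
import torch
import torch.nn as nn

class HybridEMGNet(nn.Module):
    """CNN-ANN hybrid sketch: temporal convolutions per channel plus a
    hand-crafted RMS feature, fused before the fully connected classifier."""

    def __init__(self, n_classes=4, n_channels=8):
        super().__init__()
        # Convolve/pool only along time (height) to keep the eight
        # electrode channels independent, as described in the text.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(5, 1)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(4, 1)),
            nn.Conv2d(32, 64, kernel_size=(5, 1)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(4, 1)),
            nn.Conv2d(64, 128, kernel_size=(7, 1)), nn.ReLU(),
        )  # (B, 1, 200, 8) -> (B, 128, 5, 8), flattened to 5120
        self.classifier = nn.Sequential(
            nn.Linear(128 * 5 * n_channels + n_channels, 128), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, n_classes),   # score vector per gesture class
        )

    def forward(self, emg):                             # emg: (B, 1, 200, 8)
        conv_feats = self.features(emg).flatten(1)      # (B, 5120)
        rms = emg.squeeze(1).pow(2).mean(dim=1).sqrt()  # (B, 8) channel RMS
        return self.classifier(torch.cat([conv_feats, rms], dim=1))

model = HybridEMGNet()
optimizer = torch.optim.Adam(model.parameters(), lr=0.00681292)
scores = model(torch.randn(64, 1, 200, 8))              # (64, 4) class scores
```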

3.3. Analysis of Gesture Recognition Results

The confusion matrix obtained by training the CNN–ANN hybrid network model [29,30,31] with multi-feature fusion is shown in Figure 13.
According to the confusion matrix, clenching and relaxing movements are relatively easy to recognize, with recognition rates above 95% for both. Although palm-up and palm-down movements are harder to distinguish, their recognition accuracy still exceeds 90%, and the overall average recognition rate is 94.50%. Analysis of the movements confirms this: the muscle activity involved in clenching and relaxing changes markedly, and the controlling muscle groups are not closely connected, whereas the muscles involved in flipping the palm up and down are closely connected and can interfere with each other during sEMG collection. Therefore, when designing palm gestures for sEMG control, choosing gestures whose active muscles are well differentiated and well separated can improve the performance of sEMG gesture recognition systems.

4. Arm Action Recognition

4.1. Establishment of BP Neural Network Model

The arm movement prediction model is a regression model that takes sEMG signals as input and is trained against measured arm joint angles to predict the angles of arm movements. In supervised learning with a BP neural network, input data first propagate forward from the input layer to the output layer. The discrepancy between the actual network outputs and the measured arm joint angles is then backpropagated from the output layer to the input layer. Adjusting the network weights and thresholds lets the error function E decrease along the negative gradient, eventually reaching an optimum [32].
A three-layer BP neural network is selected to establish the angle prediction model, as depicted in Figure 14. The input layer uses eight nodes, representing eight input variables: the sEMG signals from channels 1 to 7 (corresponding to the seven muscles) and the motion time. The middle layer is a single hidden layer whose node count was determined to be 20 by trial through repeated network training. The final layer is the output layer, representing the estimated joint motion angle.
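As a sketch, the three-layer network could be declared as follows; the sigmoid hidden activation and the single angle output are assumptions, since the paper specifies only the 8-20-output layout.

```python
import torch.nn as nn

# Minimal sketch of the three-layer BP angle-prediction network:
# 8 inputs (7 sEMG channels + motion time), 20 hidden nodes found by trial,
# and one output node per predicted joint angle (assumed to be 1 here).
angle_net = nn.Sequential(
    nn.Linear(8, 20),
    nn.Sigmoid(),   # hidden activation assumed; classic BP networks use sigmoid
    nn.Linear(20, 1),
)
```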
In practical applications, the BP algorithm suffers from slow convergence and the presence of local minima in the objective function, therefore necessitating some modifications to the BP algorithm [33,34].
1. Incorporate a momentum term into the BP neural network.
The momentum term acts on the backpropagation process: a quantity proportional to the previous weight change is added to each weight adjustment, generating the new weight adjustment. Its mathematical expression is:
$$\Delta w_{ij}(k+1) = (1 - m_c)\,\eta\,\delta_i p_j + m_c\,\Delta w_{ij}(k),$$
$$\Delta b_i(k+1) = (1 - m_c)\,\eta\,\delta_i + m_c\,\Delta b_i(k),$$

where $k$ is the number of training iterations and $m_c$ is the momentum factor.
Adding momentum terms during network training smooths the weight corrections and helps the learning process avoid local minima.
2. Apply an adaptive learning rate.
The criterion for adaptively adjusting the learning rate is to compare the change in the error function between iterations: if the error function decreases relative to the previous iteration, increase the learning rate; if the error function instead grows too much, reduce the learning rate. Its mathematical expression is:
$$\eta(k+1) = \begin{cases} 1.05\,\eta(k), & SSE(k+1) < SSE(k), \\ 0.7\,\eta(k), & SSE(k+1) > 1.04\,SSE(k), \\ \eta(k), & \text{otherwise}, \end{cases}$$

where $SSE$ is the sum of squared errors.
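A compact NumPy sketch of both modifications, the momentum-smoothed weight update and the piecewise learning-rate rule; the scalar demo objective is illustrative only.

```python
import numpy as np

def momentum_step(w, grad, dw_prev, lr, mc=0.9):
    # Δw(k+1) = (1 - mc)·η·(-∂E/∂w) + mc·Δw(k); the η·δ·p term in the
    # paper is the delta-rule step, i.e., the negative-gradient direction.
    dw = -(1 - mc) * lr * grad + mc * dw_prev
    return w + dw, dw

def adapt_lr(lr, sse_new, sse_old):
    # Adaptive learning-rate rule from the equation above.
    if sse_new < sse_old:
        return 1.05 * lr        # error fell: speed up
    if sse_new > 1.04 * sse_old:
        return 0.7 * lr         # error rose too much: slow down
    return lr                   # otherwise keep the current rate

# Demo on a scalar quadratic E = 0.5 * w**2 (so grad = w).
w, dw, lr, sse_old = 5.0, 0.0, 0.1, np.inf
for _ in range(100):
    w, dw = momentum_step(w, grad=w, dw_prev=dw, lr=lr)
    sse = 0.5 * w ** 2
    lr, sse_old = adapt_lr(lr, sse, sse_old), sse
print(w)  # decays toward the minimum at 0
```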

4.2. Analysis of Joint Single-Degree-of-Freedom Test Results

The sEMG datasets collected from each joint's single-degree-of-freedom motion are used to train the neural network. Figure 15 shows the error curves of the network training. The mean squared error (MSE) measures the error between the actual output values and the target output values:
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} w_i \left( Y_i - \hat{Y}_i \right)^2,$$

where $n$ is the number of data points, $w_i$ is the weight of sample $i$, $Y_i$ is the actual value, and $\hat{Y}_i$ is the predicted value.
Then, the sEMG data from each action's test set are input into the trained neural network for joint angle prediction. The resulting target and predicted angle curves are shown in Figure 16.
As can be seen from the figure, the trends of the two curves are almost identical. The larger errors occur at the initial moments of the single-joint motion range, mainly because of the time difference introduced by normalization of the electromyographic signals. On this basis, experiments on estimating joint angles during continuous motion can be conducted with the constructed neural network model.

4.3. Analysis of Continuous Motions Test Results

For the combined action of abduction and raising of the arm, the gesture recognition results obtained after processing the collected data are shown in Table 2, with the predictions of joint angles illustrated in Figure 17. Figure 17a–c depict the actual and predicted angle curves for shoulder abduction/adduction, elbow flexion/extension, and forearm rotation, respectively. Meanwhile, during the motion, the shoulder joint did not undergo flexion/extension or rotation movements. Figure 17d shows the angle prediction curves for these two degrees of freedom of the shoulder joint where no action occurred.
For the combined action of the cross arm with palm clenched, the gesture recognition results obtained after processing the collected data are shown in Table 3, with the predictions of joint angles presented in Figure 18. Figure 18a–c, respectively, display the actual and predicted angle curves for shoulder flexion/extension, elbow flexion/extension, and forearm rotation. Figure 18d shows the predicted angle curves for the two unused degrees of freedom during this action: shoulder abduction/adduction and rotation.
Recognizing multiple simultaneous actions is more difficult than recognizing single-joint movements, and the experimental data above show that when movements involving three degrees of freedom occur simultaneously across the arm joints, the predicted joint angles carry slightly larger errors than in single-joint movements. However, the target and predicted angle curves follow similar overall trends and fit closely, and the maximum error does not exceed 5°. Despite some impact on the recognition rate, the actions are still recognized.
Degrees of freedom that do not move can also be accurately identified: Figure 17d and Figure 18d show that the predicted angles consistently remain between 1° and 2°, which can be interpreted as no action occurring in those degrees of freedom. Unlike joint angle prediction, gesture recognition accuracy does not decrease when multiple actions are performed simultaneously and maintains a high recognition rate. Compared with existing research, the sEMG-based human motion mapping method established in this paper better predicts the joint angles of continuous movements and offers a new approach for predicting the joint angles of combined movements.

5. Human–Machine Action Mapping Experiment

The experimental platform [22,23,24] is shown in Figure 19, and the designed test system is illustrated in Figure 20. An operator wearing the sEMG acquisition device performs actions, which are captured as raw sEMG signals and transmitted to host computer software developed in Visual Studio. The action recognition program subjects the raw signals to a series of processing and recognition steps, and the recognized limb movements are converted into control commands. These commands are sent to the robotic arm's control system, which transmits the motion parameters to the robotic arm to execute the corresponding limb movements.
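Conceptually, the recognition-to-command layer can be as simple as the sketch below; the command strings, gesture labels, and send() transport are hypothetical, not the actual protocol of the control system.

```python
# Hypothetical glue between the recognizers and the arm controller.
GESTURE_COMMANDS = {
    "clench":    "CLAW_CLOSE",      # grasp
    "relax":     "CLAW_OPEN",       # release
    "palm_up":   "EMERGENCY_STOP",  # halt all motion
    "palm_down": "RESUME",          # restart after a stop
}

def dispatch(gesture, joint_angles_deg, send):
    """Forward one recognition result to the arm controller.

    gesture: label from the CNN-ANN gesture classifier.
    joint_angles_deg: per-DoF angles from the BP network, e.g.
        {"shoulder_flex": 30.0, "elbow_flex": 90.0, "forearm_rot": 15.0}.
    send: callable that transmits one command string.
    """
    if gesture in GESTURE_COMMANDS:
        send(GESTURE_COMMANDS[gesture])
    for joint, angle in joint_angles_deg.items():
        # Predictions below ~2 degrees are treated as "no motion"
        # (see the analysis in Section 4.3).
        if abs(angle) > 2.0:
            send(f"MOVE {joint} {angle:.1f}")

dispatch("clench", {"elbow_flex": 90.0, "shoulder_rot": 1.2}, send=print)
```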
The task process is divided into the following four parts:
Step A: Control the elbow and shoulder joints to bend forward, allowing the robotic claw to reach the position of the water bottle.
Step B: Command the robotic claw to close and grasp the water bottle.
Step C: Control the forearm to rotate, tilting the water bottle so water pours out from its mouth.
Step D: Control the forearm to rotate back to its initial position and command the robotic claw to release the water bottle. Finally, control the elbow and shoulder joints to return to their initial positions.
Each of these experimental actions is carried out a total of 20 times, and the success rate is determined by the number of completions. The experimental process is shown in Figure 21, and the experimental results are presented in Table 4.
The overall motion process consists of two parts: performing the water-pouring task and returning to the initial state. The critical actions are grasping the target water bottle and rotating the forearm to tilt the bottle and pour the water. The entire experiment proceeded smoothly, successfully completing a human–machine interaction experiment in which human movements were recognized through sEMG signals and mapped onto a humanoid robotic arm.

6. Conclusions

This paper explores the recognition of arm movements, including shoulder abduction/adduction, flexion/extension, and rotation, elbow flexion/extension, forearm rotation, and four hand gestures, through an sEMG signal-joint angle model. The study establishes a mapping model between sEMG signals and movement angles by extracting features and employing a backpropagation neural network, yielding high-accuracy predictions of the angles of both single-degree-of-freedom and combined joint movements. In addition, the complete action of pouring water was accomplished by mapping actions onto a robotic arm using sEMG signals, demonstrating the feasibility of applying sEMG signals to robotic arm control. The results indicate that the sEMG signal-joint angle mapping model is an efficient motion recognition algorithm and plays an important role in human–computer interaction. It must be noted, however, that human limb activities are diverse and muscular conditions vary, presenting numerous problems and challenges to be addressed in practice. Furthermore, with the development of deep learning models, using transformer-based pre-trained models with practical transfer learning algorithms to reduce training iterations is a research direction worth exploring.

Author Contributions

Conceptualization, Y.Z. and P.S.; methodology, Y.Z. and B.Z.; software, Y.Z. and G.Z.; validation, G.Z. and H.Z.; formal analysis, Y.Z. and H.Z.; resources, Y.Z. and P.S.; data curation, G.Z. and H.Z.; writing—original draft preparation, Y.Z. and G.Z.; writing—review and editing, Y.Z., H.Z. and P.S.; supervision, P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Zhejiang Province (Grant No. LTGY24E050002), the National Natural Science Foundation of China (Grant Nos. 52105037, U21A20122), and the Key Laboratory of E&M of the Ministry of Education and Zhejiang University of Technology (Grant No. EM2021120104).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Medical Ethics Committee of Zhejiang Academy of Traditional Chinese Medicine (No. KTSC2023084), Hangzhou, Zhejiang.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Feng, M.; Meunier, J. Skeleton Graph-Neural-Network-Based Human Action Recognition: A Survey. Sensors 2022, 22, 2091.
2. Ma, N.; Wu, Z.X.; Cheung, Y.-M.; Guo, Y.; Gao, Y.; Li, J.; Jiang, B. A Survey of Human Action Recognition and Posture Prediction. Tsinghua Sci. Technol. 2022, 27, 973–1001.
3. Wei, W.; Tan, F.; Zhang, H.; Mao, H.; Fu, M.; Samuel, O.W.; Li, G. Surface Electromyogram, Kinematic, and Kinetic Dataset of Lower Limb Walking for Movement Intent Recognition. Sci. Data 2023, 10, 358.
4. Tan, P.C.; Han, X.; Zou, Y.; Qu, X.; Xue, J.; Li, T.; Wang, Y.; Luo, R.; Cui, X.; Xi, Y.; et al. Self-Powered Gesture Recognition Wristband Enabled by Machine Learning for Full Keyboard and Multicommand Input. Adv. Mater. 2022, 34, 21.
5. Zhang, W.L.; Wang, Y.L.; Zhang, J.; Pang, G. EMG-FRNet: A feature reconstruction network for EMG irrelevant gesture recognition. Biosci. Trends 2023, 17, 219–229.
6. Zhao, S.H.; Zhou, J.H.; Fu, Y.F. Investigation of gesture recognition using attention mechanism CNN combined electromyography feature matrix. J. Electron. Meas. Instrum. 2023, 37, 59–67.
7. Kang, S.Y.; Kim, H.; Park, C.; Sim, Y.; Lee, S.; Jung, Y. sEMG-Based Hand Gesture Recognition Using Binarized Neural Network. Sensors 2023, 23, 1436.
8. Shen, S.; Li, M.; Mao, F.; Chen, X.; Ran, R. Gesture Recognition Using MLP-Mixer With CNN and Stacking Ensemble for sEMG Signals. IEEE Sens. J. 2024, 24, 4960–4968.
9. He, J.Y.; Niu, X.Y.; Zhao, P.H.; Lin, C.; Jiang, N. From Forearm to Wrist: Deep Learning for Surface Electromyography-Based Gesture Recognition. IEEE Trans. Neural Syst. Rehabil. Eng. 2024, 32, 102–111.
10. Jiang, Y.J.; Song, L.; Zhang, J.M.; Song, Y.; Yan, M. Multi-Category Gesture Recognition Modeling Based on sEMG and IMU Signals. Sensors 2022, 22, 5855.
11. Couraud, M.; Cattaert, D.; Paclet, F.; Oudeyer, P.Y.; De Rugy, A. Model and experiments to optimize co-adaptation in a simplified myoelectric control system. J. Neural Eng. 2018, 15, 026006.
12. Naik, G.R.; Nguyen, H.T. Nonnegative matrix factorization for the identification of EMG finger movements: Evaluation using matrix analysis. IEEE J. Biomed. Health Inf. 2019, 19, 478–485.
13. Liu, Q.; Ma, L.; Ai, Q.; Chen, K.; Meng, W. Knee Joint Angle Prediction Based on Muscle Synergy Theory and Generalized Regression Neural Network. In Proceedings of the 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Auckland, New Zealand, 9–12 July 2018; pp. 28–32.
14. Zhang, X.; Chen, X.; Li, Y.; Lantz, V.; Wang, K.; Yang, J. A Framework for Hand Gesture Recognition Based on Accelerometer and EMG Sensors. IEEE Trans. Syst. Man Cybern. Part A-Syst. Hum. 2011, 41, 1064–1076.
15. Wang, Q.; Chen, X.; Chen, R.Z.; Chen, Y.; Zhang, X. Electromyography-Based Locomotion Pattern Recognition and Personal Positioning Toward Improved Context-Awareness Applications. IEEE Trans. Syst. Man Cybern. Syst. 2013, 43, 1216–1227.
16. Tchantchane, R.; Zhou, H.; Zhang, S.; Alci, G. A Review of Hand Gesture Recognition Systems Based on Noninvasive Wearable Sensors. Adv. Intell. Syst. 2023, 5, 2300207.
17. Yao, P.; Wang, K.F.; Xia, W.W.; Guo, Y.; Liu, T.; Han, M.; Gou, G.; Liu, C.; Xue, N. Effects of Training and Calibration Data on Surface Electromyogram-Based Recognition for Upper Limb Amputees. Sensors 2024, 24, 920.
18. Garouche, M.; Thamsuwan, O. Development of a Low-Cost Portable EMG for Measuring the Muscular Activity of Workers in the Field. Sensors 2023, 23, 7873.
19. Ao, J.X.; Liang, S.L.; Yan, T.; Hou, R.; Zheng, Z.; Ryu, J.S. Overcoming the effect of muscle fatigue on gesture recognition based on sEMG via generative adversarial networks. Expert Syst. Appl. 2023, 238, 122304.
20. Geng, W.D.; Du, Y.; Jin, W.G.; Wei, W.; Hu, Y.; Li, J. Gesture recognition by instantaneous surface EMG images. Sci. Rep. 2016, 6, 36571.
21. Fu, R.R.; Zhang, B.Z.; Liang, H.F.; Wang, S.; Wang, Y.; Li, Z. Gesture recognition of sEMG signal based on GASF-LDA feature enhancement and adaptive ABC optimized SVM. Biomed. Signal Process. Control 2023, 85, 105104.
22. Sun, P.; Li, Y.B.; Wang, Z.S.; Chen, K.; Chen, B.; Zeng, X.; Zhao, J.; Yue, Y. Inverse displacement analysis of a novel hybrid humanoid robotic arm. Mech. Mach. Theory 2020, 147, 103743.
23. Sun, P.; Li, Y.B.; Shuai, K.; Yue, Y.; Wei, B. Workspace optimization of a humanoid robotic arm based on the multi-parameter plane model. Robotica 2022, 40, 3088–3103.
24. Sun, P.; Li, Y.B.; Chen, K.; Zhu, W.; Zhong, Q.; Chen, B. Generalized kinematics analysis of hybrid mechanisms based on screw theory and Lie groups Lie algebras. Chin. J. Mech. Eng. 2021, 34, 98.
25. Luca, M. Nonlinear spatio-temporal filter to reduce crosstalk in bipolar electromyogram. J. Neural Eng. 2024, 24, 016021.
26. Zhu, X.X.; Pang, Y.Y.; Li, L.; Sun, W.; Ding, L.; Song, Q.; Shen, P. Standard isometric contraction has higher reliability than maximum voluntary isometric contraction for normalizing electromyography during level walking among older adults with knee osteoarthritis. Front. Bioeng. Biotechnol. 2024, 12, 1276793.
27. Niu, Q.F.; Shi, L.; Niu, Y.; Jia, K.; Fan, G.; Gui, R.; Wang, L. Motion intention recognition of the affected hand based on the sEMG and improved DenseNet network. Heliyon 2024, 10, e26763.
28. Romera, E.; Alvarez, J.M.; Bergasa, L.M.; Arryo, R. ERFNet: Efficient Residual Factorized ConvNet for Real-Time Semantic Segmentation. IEEE Trans. Intell. Transp. Syst. 2018, 19, 263–272.
29. O'Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. arXiv 2015, arXiv:1511.08458.
30. Nasr, A.; Bell, S.; He, J.Y.; Whittaker, R.L.; Jiang, N.; Dickerson, C.R.; McPhee, J. MuscleNET: Mapping electromyography to kinematic and dynamic biomechanical variables by machine learning. J. Neural Eng. 2021, 18, 0460d3.
31. Suganthi, J.R.; Rajeswari, K. Pattern recognition for EMG based forearm orientation and contraction in myoelectric prosthetic hand. J. Intell. Fuzzy Syst. 2024, 46, 7047–7059.
32. Kakoty, N.M.; Hazarika, S.M.; Gan, J.Q. EMG Feature Set Selection Through Linear Relationship for Grasp Recognition. J. Med. Biol. Eng. 2019, 36, 883–890.
33. Li, K.S.; Li, K.; Zhang, W.S. PCA face recognition algorithm based on improved BP neural network. Comput. Appl. Softw. 2014, 31, 158–161.
34. Xu, Y.; He, M. Improved artificial neural network based on intelligent optimization algorithm. Neural Netw. World 2018, 28, 345–360.
Figure 1. Distribution of sEMG signal acquisition points.
Figure 2. The MYO bracelet wearing position and schematic diagram of gesture actions: (a) Clenching the palm. (b) Flipping up the palm. (c) Flipping down the palm.
Figure 3. IMU wearing mode.
Figure 4. Experimental action flow chart.
Figure 5. Schematic diagram of single group action flow.
Figure 6. Decomposition example of abduction and raising of the arm.
Figure 7. Decomposition example of cross arm with palm clenched.
Figure 8. Original sEMG signal with noise.
Figure 9. Filtered sEMG signal waveform.
Figure 10. sEMG signal after feature extraction.
Figure 11. sEMG eigenvalue extraction and normalization processing results.
Figure 12. Network structure of the gesture recognition algorithm.
Figure 13. Gesture recognition results.
Figure 14. Structure of the BP neural network angle prediction model built in this paper.
Figure 15. Error curve of each joint movement: (a) Shoulder abduction/adduction. (b) Shoulder flexion/extension. (c) Elbow flexion/extension. (d) Forearm rotation. (e) Shoulder rotation.
Figure 16. Comparison of target and predicted angles during joint movement: (a) Shoulder abduction/adduction. (b) Shoulder flexion/extension. (c) Elbow flexion/extension. (d) Forearm rotation. (e) Shoulder rotation.
Figure 17. Intersection curve of each joint (the combined action of abduction and raising of the arm): (a) Shoulder abduction/adduction. (b) Elbow flexion/extension. (c) Forearm rotation. (d) Shoulder flexion/extension and rotation.
Figure 18. Intersection curve of each joint (the combined action of the cross arm with palm clenched): (a) Shoulder flexion/extension. (b) Elbow flexion/extension. (c) Forearm rotation. (d) Shoulder abduction/adduction and rotation.
Figure 19. Physical drawing of the humanoid manipulator.
Figure 20. Schematic diagram of the test system.
Figure 21. Experiment on control of the humanoid robot arm: (a) Performing the water-pouring action. (b) Restoring the initial state.
Table 1. Joint motion planning.

Joint | Motion | Range (°)
Shoulder | Abduction/Adduction | 0–90
Shoulder | Flexion/Extension | 0–90
Shoulder | Internal rotation/External rotation | 0–90
Elbow | Flexion/Extension | 0–120
Elbow | Internal rotation/External rotation | −30–150
Table 2. Gesture recognition rate of the combined action of abduction and raising of the arm.

Motion | Successful Recognition | Failed Recognition | Success Rate
Palm relax | 48 | 2 | 96%
Table 3. Gesture recognition rate of the combined action of the cross arm with palm clenched.

Motion | Successful Recognition | Failed Recognition | Success Rate
Palm clench | 49 | 1 | 98%
Table 4. Manipulator control success rate.

Step | 1 | 2 | 3 | 4 | 5 | Success Rate
A | 4/4 | 4/4 | 3/4 | 4/4 | 4/4 | 95%
B | 4/4 | 4/4 | 4/4 | 4/4 | 4/4 | 100%
C | 4/4 | 4/4 | 4/4 | 4/4 | 4/4 | 100%
D | 4/4 | 3/4 | 4/4 | 4/4 | 2/4 | 85%
