Article

Force Myography-Based Human Robot Interactions via Deep Domain Adaptation and Generalization

1 Menrva Research Group, Schools of Mechatronic Systems Engineering and Engineering Science, Simon Fraser University, Metro Vancouver, BC V5A 1S6, Canada
2 Biomedical and Mobile Health Technology Laboratory, ETH Zurich, Lengghalde 5, 8008 Zurich, Switzerland
* Author to whom correspondence should be addressed.
Sensors 2022, 22(1), 211; https://doi.org/10.3390/s22010211
Submission received: 15 November 2021 / Revised: 25 December 2021 / Accepted: 27 December 2021 / Published: 29 December 2021
(This article belongs to the Special Issue Wearable Sensors for Human Motion Analysis)

Abstract

Estimating applied force with the force myography (FMG) technique can be effective in human-robot interactions (HRI) using data-driven models. A model predicts well when adequate training and evaluation data come from the same session, which is often time consuming and impractical to collect. In real scenarios, a pretrained transfer learning model that predicts forces quickly once fine-tuned to the target distribution would be the favorable choice and hence needs to be examined. Therefore, in this study a unified supervised FMG-based deep transfer learner (SFMG-DTL) model using a CNN architecture was pretrained with multi-session FMG source data (Ds, Ts) and evaluated for force estimation in separate target domains (Dt, Tt) via supervised domain adaptation (SDA) and supervised domain generalization (SDG). For SDA, case (i) intra-subject evaluation (Ds ≠ Dt-SDA, Ts ≈ Tt-SDA) was examined, while for SDG, case (ii) cross-subject evaluation (Ds ≠ Dt-SDG, Ts ≠ Tt-SDG) was examined. Fine-tuning with a few "target training data" calibrated the model effectively towards the target distribution. The proposed SFMG-DTL model performed better, with higher estimation accuracies and lower errors (R2 ≥ 88%, NRMSE ≤ 0.6), in both cases. These results indicate that interactive force estimation via transfer learning can improve daily HRI experiences where "target training data" are limited or faster adaptation is required.

1. Introduction

Force myography (FMG) is a contemporary, non-invasive, wearable technology that, like traditional surface electromyography (sEMG), can read muscle contractions, but without requiring skin preparation or other precautions. The technology is based on force sensing resistors (FSRs), which detect resistance changes when pressure is applied to them. An FMG band donned around a limb of the upper or lower extremities can be used to detect underlying muscle contractions during activities, and these signals can be interpreted using machine learning (ML) techniques [1]. Although sEMG technology has been around for several decades, the measured electrical activities of underlying muscles during limb movements are faint, requiring substantial and costly signal processing units and skin preparation for electrode placement [2]. In contrast, the FMG technique is cost-effective, repeatable, electrically robust, and requires minimal signal processing and only optional feature engineering [3]. In addition, the FMG technique was found effective, like sEMG, in several research studies [4,5,6,7] as an emerging technology and has been studied in similar applications of gesture recognition, prosthetic control, activities of daily life, rehabilitation, and human-machine interaction (HMI) [8,9,10,11,12,13]. However, there are very few studies on FMG-based deep learning (DL) transfer techniques in human-robot interaction (HRI). In a recent study, transfer learning for hand gesture classification using a convolutional neural network (CNN) with FMG signals was investigated [14]. The authors in [15] showed improved gesture recognition accuracy via FMG-based transfer learning by incorporating multiple source domains from other persons. In only one study on FMG-based physical human-robot interaction (pHRI), researchers implemented an FMG-based recurrent neural network (RNN) model to classify whether a hand movement pattern in a collaborative task with an industrial robot was user intended or random [16].
As an established technology, sEMG has been well studied for implementing deep transfer learning in HMI, HRI, and other applications. Like FMG, this signal is affected by electrode placement on the limb, electrode shift, the intensity and variability of muscle contraction during intra-/inter-session evaluation, and limb motion or posture during interactions [17]. Despite the transient nature of the signal, studies conducted on model generalization and domain adaptation showed impressive results [18,19]. Researchers trained a model with inductive and supervised transductive transfer learning for gesture classification using a random forest (RF) algorithm; the model was fine-tuned with short calibration data from an unseen participant and evaluated successfully with a humanoid Pepper robot [20]. High-density sEMG (HD-sEMG) signals were used in a deep domain adaptation framework and were found to improve inter-session gesture detection with either unlabeled test data or fine-tuning on labelled calibration data [21]. Electrode shifts and day-to-day variability were investigated thoroughly through adaptive transfer learning [22,23]. Moreover, periodic recalibration with a small amount of training data was found effective for multi-day prosthetic control by applying transfer learning [24]. In [25], the domain shift between the data of each training trial (source domain) and calibration data acquired from a new subject (target domain) was evaluated for cross-subject elbow EMG-torque models using feature correlation. In [26], the authors proposed a supervised covariate shift adaptation method using only a small calibration set. Furthermore, a recent study showed that aggregating source distributions from multiple users with deep transfer learning enhanced a gesture recognition model's performance [27]. Among the sEMG-based pHRI studies in the literature, few addressed applied force and torque in dynamic motion because of the complexity of regression when calibration is required. Since the FMG signal has similar characteristics, and there is a gap in the literature on using transfer learning for the pHRI regression problem, this study focuses on force estimation using multiple-session data sets via transfer learning.
In most industrial human-robot collaborative activities, applied hand forces are required to carry out certain tasks. Traditional force/torque (FT) sensors can read the applied force precisely, although they are bulky, require special signal processing units, and are uncomfortable as a worn device. Estimating isometric or dynamic hand force/torque using FMG signals via FSRs was found to be a favorable alternative to FT sensors [28,29,30]. Recently, measuring force via FMG biosignals during pHRI between human participants and a linear robot was found effective for intra-session evaluation using traditional ML algorithms [31]. However, intra-session FMG-based pHRI required collecting adequate labelled training data, which was biased and impractical in real scenarios. In addition, each session's data were affected by transient, instantaneous signals, sensor position shifts, physiological changes, limb motions, and postures each time an FMG band was donned. Such domain shifts and the lack of adequate data severely limited inter-session and inter-participant performance evaluations. In a recent study, inter-participant domain generalization via a traditional support vector regressor (SVR) was investigated [32], although that study did not investigate deep transfer learning or intra-session evaluations when a participant interacted with a robot on a regular basis.
Therefore, this study conducted a major investigation into the feasibility of FMG-based HMI and HRI applications via deep transfer learning where interactions are expected to occur on a regular basis. Transductive transfer learning (a few target data available/seen) via supervised domain adaptation (SDA) for inter-session evaluation and inductive transfer learning (target data not available/unseen) via supervised domain generalization (SDG) for inter-participant evaluation were investigated to overcome the limitations of intra-session evaluation [33,34,35]. Domain adaptation reuses part of a model pretrained with large pools of source domains to predict a different but related target domain, where both domains have the same feature space with different distributions. Domain generalization, on the other hand, uses a model pretrained with source domains and attempts to predict unseen target data. It is particularly beneficial for mitigating gaps between different domains when knowledge about the target domain is absent [36,37]. These methods have been successfully applied in image processing, but there are very few studies in biosignal-based pHRI because of the transient and dynamic nature of biofeedback; hence, they need to be investigated. In a repetitive FMG-based pHRI application between a participant and a robot, previous intra-session data could contribute to building a large dataset. Due to the transient signal, sensor shifts, and the dynamic interactive environment, each session's data were unique even when the task (applied force in a certain motion) was the same. Therefore, the focus of this study was to investigate whether these multiple-source data could improve the user experience in daily interactions by pretraining a model and fine-tuning it via transfer learning for domain adaptation. We further investigated the impact of domain generalization for a different pHRI task between the robot and several other participants (applied interacting force in another motion) using the same pretrained model. Such cross-subject evaluation became more challenging due to the signal variability between the target distribution and the multiple intra-session source distributions. Fine-tuning the pretrained model via transfer learning could bridge the gap between the source and target domains. For both SDA and SDG, a few calibration data (target training data) were used to fine-tune the model to the instantaneous state of the signal captured during the dynamic interactions.
An FMG-based convolutional neural network (FMG-CNN) architecture was proposed to investigate pHRI between several human participants and a linear robot/stage via domain adaptation and generalization. This architecture was used as a nonlinear regression model to map applied forces from instantaneous FMG signals during interactions, as shown in Figure 1. For transfer learning, multiple source distributions were used to pretrain a unified supervised FMG-based deep transfer learner (SFMG-DTL) model during the training phase. These multiple FMG source distributions (source distribution: Ds) were collected in several sessions during regular pHRI activities between one human participant and the linear robot while the participant applied hand forces in a certain dynamic SQ-1 motion (source task: Ts). The SFMG-DTL model was assessed on separate cases during the evaluation phase: on target domain 1 for supervised domain adaptation (case i: SDA) and on target domain 2 for supervised domain generalization (case ii: SDG). In case i, the inter-session target domain 1 (Dt-SDA) was evaluated, where the same participant (intra-subject) interacted with the linear robot in the SQ-1 motion (Tt-SDA). In case ii, the inter-participant target domain 2 (Dt-SDG) was assessed separately for five (5) other participants (cross-subject) interacting with the linear robot in the SQ-2 motion (Tt-SDG). At the beginning of the evaluation in both cases, a few calibration data (target training data) were collected to fine-tune the pretrained model to recognize the target distribution. Intra-session evaluations on the target domains (target training and target test data) were conducted using the FMG-CNN architecture to compare the performances of the SDA and SDG cases. Several machine learning algorithms, such as support vector regression (SVR) and multi-dimensional support vector regression (MSVR), were also used for performance comparison in domain adaptation.
Major contributions of this study were:
  • Investigating the feasibility of deep transfer learning in repetitive FMG-based pHRI applications utilizing inter-session FMG data for the first time;
  • Proposing a unified transfer learner for both supervised domain adaptation and domain generalization;
  • Leveraging periodic calibration as needed with less data than normally required; and
  • Proposing a nonlinear FMG-CNN regression architecture for mapping applied force from FMG signals without requiring biomechanical modelling of the human arm.
The rest of this article is organized as follows: Section 2 describes the materials and methods, where the methodology, experimental setup, and protocol used are explained. Results are presented in Section 3. Performance evaluation of the proposed framework is discussed in Section 4, while Section 5 concludes this article.

2. Materials and Methods

2.1. Problem Statement

2.1.1. Source and Target Domain

In this study, multiple source domains, Dsi (i = 1, 2, 3), were used for pretraining a deep transfer learning model. Each source domain Dsi = {χsj, Ysj} had a data matrix χsj ∈ ℝ^(Nsj × SC), such that i ∈ {1, 2, 3}, j ∈ {1, 2, ..., NS}, and SC = {c1, ..., c32} (c: 32 FMG channels, SC: dimensionality of the feature vectors, NS: number of samples), and labels Ysj = {Fsjx, Fsjy, f(·)}, where f(·) was a predictive function and Fsjx, Fsjy were the label spaces of applied forces in the X and Y dimensions, such that f: χsj → Fsjx and f: χsj → Fsjy. All distributions were homogeneous and balanced. The target domain Dt = {χt} had a data matrix χt ∈ ℝ^(Nt × SC) (SC: dimensionality of the feature vectors, Nt: number of samples in the target domain). Calibration data, Cd = {χc, Yc} with Yc = {Fcx, Fcy, f(·)}, a small labelled subset of Dt, were used as target training data. A transfer learner pretrained with Dsi and fine-tuned with Cd predicted the force label spaces Yt = {Ftx, Fty, f(·)} from the target test distribution {χt*} ⊂ Dt. In the case of domain adaptation, the source and target domains were different, but the source and target tasks of applied force estimation in the SQ-1 motion were the same (Ds ≠ Dt, Ts = Tt; Ts, Tt: applied interactive forces in SQ-1 motion). In domain generalization, both the source and target domains and tasks were different (Ds ≠ Dt and Ts ≠ Tt, where Ts: applied force in SQ-1 motion and Tt: applied force in SQ-2 motion). Acronyms used in this article are listed in Table 1.
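To make the notation concrete, the following minimal NumPy sketch lays out the source, target, calibration, and target test splits described above; the array names, random placeholder values, and per-session sample counts are illustrative assumptions rather than the study's exact splits.

```python
import numpy as np

SC = 32                                                       # 32 FMG channels per sample
source_X = [np.random.rand(2800, SC) for _ in range(3)]       # Ds1..Ds3 data matrices (Nsj x SC)
source_Y = [np.random.rand(2800, 2) for _ in range(3)]        # labels Ysj: applied force (Fx, Fy)

target_X = np.random.rand(1600, SC)                           # target domain Dt
target_Y = np.random.rand(1600, 2)

# Calibration data Cd: a small labelled subset of Dt used as "target training data";
# the remaining samples form the target test distribution evaluated by the model.
calib_X, calib_Y = target_X[:1200], target_Y[:1200]
test_X, test_Y = target_X[1200:], target_Y[1200:]
```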

2.1.2. Applied Interaction Force Estimation

At an instant of time t, the instantaneous raw input target test signals $s_t^C$ (C channels), arriving at the model with probability $P_t(s_t^C)$, were mapped by the model $\delta$ with parameter set $\mu$ to the estimated applied forces $\hat{F}_{x_t}$ and $\hat{F}_{y_t}$ (forces in the X and Y dimensions) in dynamic motion such that:

$$\hat{f}_x(\cdot) = \hat{F}_{x_t} = \delta(s_t^C, \mu_1)$$
$$\hat{f}_y(\cdot) = \hat{F}_{y_t} = \delta(s_t^C, \mu_2)$$

To find the best parameter spaces $\mu_1$ and $\mu_2$, the loss functions were computed:

$$\mu_1 = L(F_{x_t}, \hat{F}_{x_t}) = \arg\min_{\mu_1} \sum_{k=1}^{t} \left( F_{x_k} - \hat{F}_{x_k} \right)^2$$
$$\mu_2 = L(F_{y_t}, \hat{F}_{y_t}) = \arg\min_{\mu_2} \sum_{k=1}^{t} \left( F_{y_k} - \hat{F}_{y_k} \right)^2$$

The mean squared error (MSE) was used to calculate the average squared difference between the estimated and real values. The MSE for a single observation was:

$$MSE_x = \frac{1}{R} \sum_{k=1}^{R} \left( F_{x_k} - \hat{F}_{x_k} \right)^2$$
$$MSE_y = \frac{1}{R} \sum_{k=1}^{R} \left( F_{y_k} - \hat{F}_{y_k} \right)^2$$

where R was the number of responses, $F_{x_k}$, $F_{y_k}$ were the target outputs, and $\hat{F}_{x_k}$, $\hat{F}_{y_k}$ were the network's predictions for response k.
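As a minimal rendering of the loss above, the per-dimension MSE can be computed directly; the arrays below are hypothetical placeholders for true and predicted forces over R responses.

```python
import numpy as np

def mse(f_true, f_pred):
    # average squared difference between real and estimated force for one dimension
    f_true, f_pred = np.asarray(f_true), np.asarray(f_pred)
    return np.mean((f_true - f_pred) ** 2)

mse_x = mse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])   # MSE_x over R = 3 hypothetical responses
```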

2.2. Experimental Setup

FMG-based pHRI was investigated where a human participant collaborated with a linear robot/biaxial stage, as shown in Figure 2. Interactions occurred by applying force at the end-effector of the robot. Two FMG bands (32-channel feature space) built with FSRs (TPE 502C, Tangio Printed Electronics, North Vancouver, BC, Canada) were used to read muscle contractions during interactions through data acquisition systems (NI DAQ 6259 and 6341, National Instruments, Austin, TX, USA). These bands were wrapped around the forearm and upper arm muscle bellies. The customized linear robot, a Cartesian planar robot, had two perpendicular linear stages (X-LSQ450B, Zaber Technologies, Vancouver, BC, Canada) in the X and Y dimensions of the planar workspace with a customized gripper on top as the end-effector. The true label of applied force was recorded with a 6-axis FT sensor (Mini45, ATI Industrial Automation, Apex, NC, USA) mounted inside the gripper. Compliant collaboration was implemented via admittance control, where the applied force was converted proportionally into motor displacements of the linear stages. Therefore, the gripper would slide along the workspace, following the trajectory and direction of the human-applied force in dynamic motion. The linear robot was fixed firmly on a table for interactions. An HP ZBook laptop (Intel Core i7, 16 GB RAM) was used for data collection via a LabVIEW interface and for model evaluations via MATLAB scripts.
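The admittance control loop described above can be summarized in a short sketch: each measured force sample is proportionally converted into a stage displacement command. The gain, limit, and function name below are illustrative assumptions; the actual controller ran on the LabVIEW/Zaber stack with the study's own tuning.

```python
def admittance_step(force_x_n, force_y_n, gain_mm_per_n=0.5, max_step_mm=5.0):
    """Convert one measured force sample (N) into incremental X/Y stage displacements (mm)."""
    clamp = lambda v: max(-max_step_mm, min(max_step_mm, v))
    dx = clamp(gain_mm_per_n * force_x_n)   # displacement proportional to applied force
    dy = clamp(gain_mm_per_n * force_y_n)
    return dx, dy                           # the gripper follows the direction of the applied force
```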

2.3. Proposed FMG-CNN Architecture

Figure 3 shows the proposed FMG-CNN architecture used in this study. Raw FMG signals were used for training and evaluating SDA and SDG. Two separate models, Model X and Model Y, were used for estimating forces in the X and Y dimensions; each had an input layer of size 1 × 32 with "zerocenter" normalization. Both models had conv1 and conv2 convolutional blocks. Raw data were preprocessed using min-max scaling before being passed to the input layer. In each convolution block, the conv layer was followed by a ReLU and a batch normalization layer. For Model X, 32 filters were used in the conv1 block, while 64 filters were used for Model Y. The conv2 block had 16 filters in both models. A fully connected layer with 20 connections followed the conv layers, and finally a regression layer was used to map the instantaneous force. Batch normalization helped to alleviate the internal covariate shift present during training, as the input distributions of layers changed due to parameter changes in previous layers. Filters sized 3 × 3 with a stride of 1 and a padding of 1 were used. During evaluation, fine-tuning occurred in the final fully connected layer. For both pretraining and fine-tuning, stochastic gradient descent (SGD) was used as the optimizer. A learning rate (LR) of 1E-04 and a maximum epoch count (E) of 40 were used in pretraining, while LR = 1E-05 with E = 60 was used during evaluation. The MSE loss was used for validation of the training process.
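The layer stack above maps closely onto standard deep learning toolkits. Below is a hedged PyTorch sketch of the Model X/Model Y regressors (the original models were built with MATLAB's Deep Learning Toolbox); the flattening step, the handling of the "zerocenter" input normalization, and the single-output regression head are assumptions where the text is not explicit.

```python
import torch
import torch.nn as nn

class FMGCNNRegressor(nn.Module):
    """Approximate FMG-CNN: conv1 -> ReLU -> BN, conv2 -> ReLU -> BN, FC(20), regression output."""
    def __init__(self, conv1_filters: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            # conv1 block: 3x3 filters, stride 1, padding 1 (input treated as a 1x32 "image")
            nn.Conv2d(1, conv1_filters, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(conv1_filters),
            # conv2 block: 16 filters in both models
            nn.Conv2d(conv1_filters, 16, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.BatchNorm2d(16),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 1 * 32, 20),   # fully connected layer with 20 connections
            nn.Linear(20, 1),             # regression output: instantaneous force in one dimension
        )

    def forward(self, x):
        # x: (batch, 1, 1, 32) -- one min-max scaled frame of 32 FMG channels
        return self.fc(self.features(x))

model_x = FMGCNNRegressor(conv1_filters=32)   # estimates force in the X dimension
model_y = FMGCNNRegressor(conv1_filters=64)   # estimates force in the Y dimension
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model_x.parameters(), lr=1e-4)  # pretraining: SGD, LR 1E-4, 40 epochs
```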
For transfer learning, a unified framework for SDA and SDG based on the FMG-CNN architecture was proposed, as shown in Figure 4. In this framework, the model learned discriminative features of the multiple source domains during pretraining. During fine-tuning, retraining the last three layers of the saved model helped it converge quickly in recognizing the target distribution, as sketched below.
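Building on the sketch above, a minimal version of the fine-tuning step could look as follows; which layers are frozen, the data loader interface, and the helper name fine_tune are assumptions based on the description of retraining the final layers with the reported evaluation hyperparameters (SGD, LR = 1E-5, E = 60).

```python
def fine_tune(model, calib_loader, epochs: int = 60, lr: float = 1e-5):
    """Adapt a pretrained FMGCNNRegressor to a target domain using a small calibration set."""
    for p in model.features.parameters():
        p.requires_grad = False                      # keep the pretrained conv blocks fixed
    optimizer = torch.optim.SGD(model.fc.parameters(), lr=lr)
    criterion = torch.nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for fmg, force in calib_loader:              # calibration ("target training") data
            optimizer.zero_grad()
            loss = criterion(model(fmg), force)
            loss.backward()
            optimizer.step()
    return model
```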

2.4. Protocol

A total of 6 participants (P1, …, P6) volunteered for this study. All participants were healthy and right-handed, and their average age was 33 ± 8 years. Informed consent was obtained from all subjects involved in the study, which was approved by the Office of Research Ethics, Simon Fraser University, British Columbia, Canada.
Figure 5 shows the training and evaluation phases followed in this study to investigate the proposed SFMG-DTL transfer learning model. The source and target distributions and the model hyperparameters used are summarized in Table 2. During the training phase, source distributions were collected and used for pretraining the model, while in the evaluation phase, separate target domains for SDA and SDG were collected and evaluated, as discussed below.

2.4.1. Training Phase

Multiple-Source Data Collection

Training data were collected in three (3) different sessions during interactions between participant P1 and the linear robot. The collaborative task was conducted by applying hand force in a dynamic square motion SQ-1 of varying sizes on the planar surface, as shown in Figure 2. Participant P1 sat comfortably in front of the linear robot/biaxial stage on a chair locked in position.
Two FMG bands were donned on the forearm and upper arm of the participant's dominant right arm (Figure 1 and Figure 2e). A total of 14 cycles of data were collected during these sessions, with 600 × 32 samples collected per cycle. In each cycle, the participant grasped the gripper and applied interactive force in a dynamic square motion, defined as the source task (TSDA = applied force in SQ-1 motion). Applying forces in a non-uniform anti-clockwise square motion with gradually increasing displacement area on the planar surface (Figure 2c) was repeated continuously to complete one cycle.

Pretraining Deep Learning Model

For domain adaptation and generalization, the proposed FMG-CNN architecture was used to pretrain the unified SFMG-DTL transfer learner model. The model was trained to predict applied forces in the X and Y dimensions from a given distribution. Two separate models (Model X and Model Y) were generated for estimating forces in the X and Y dimensions and saved as .mat files for use in the evaluation sessions.

2.4.2. Evaluation Phase

Case i: Evaluating Intra-Subject/Inter-Session Target Domain (Dt-SDA, Tt-SDA) via Domain Adaptation (Ds ≠ Dt, Ts ≈ Tt)

Inter-session evaluation was investigated to see whether multiple-session data from a repetitive user (intra-subject/participant) could be useful in practical applications. In this target task, participant P1 interacted with the linear robot at a similar motion speed and in the same SQ-1 pattern, following the source data collection protocol. For domain adaptation, a few calibration data were first collected as target training data (1200 × 32 samples) for fine-tuning; these formed target dataset 1. The transfer learner was thus retrained to adapt to the new target domain. It was then evaluated on 400 × 32 samples of target test data.

Case ii: Evaluating Cross-Subject/Inter-Participant Target Domain (Dt-SDG, Tt-SDG) via Domain Generalization (Ds ≠ Dt, Ts ≠ Tt)

For domain generalization, five participants (P2:P6) contributed to evaluating the pretrained SFMG-DTL model. Target distributions were collected from each participant during a collaborative task in which they interacted with the robot by applying force in a uniform square motion (TSDG = applied force in SQ-2 motion), as shown in Figure 2d. For each participant, a total of 4 cycles of target data (400 × 32 samples/cycle) were collected following a protocol similar to the source data collection; this was termed target dataset 2. Leave-one-out cross-validation (LOOCV) was implemented, where 3 cycles were used as target training data for fine-tuning the SFMG-DTL model and 1 cycle was used as target test data, as sketched below.
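A minimal sketch of this LOOCV loop, reusing the fine_tune helper sketched in Section 2.3, is shown below; cycles, make_loader, and evaluate are illustrative placeholders for the per-cycle data and metric code, not the study's implementation.

```python
import copy

def loocv_sdg(pretrained_model, cycles, make_loader, evaluate):
    """Rotate 4 target cycles: 3 for fine-tuning (calibration), 1 held out as target test data."""
    scores = []
    for held_out in range(len(cycles)):
        train_cycles = [c for i, c in enumerate(cycles) if i != held_out]
        model = copy.deepcopy(pretrained_model)              # start from the saved SFMG-DTL weights
        model = fine_tune(model, make_loader(train_cycles))  # calibrate on 3 cycles
        scores.append(evaluate(model, make_loader([cycles[held_out]])))
    return scores
```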

2.5. Performance Metrics

2.5.1. Statistical Tools and Tests

Performance of the SFMG-DTL model in estimating force in the dynamic motion was evaluated using the coefficient of determination (R2) and normalized root mean square error (NRMSE).
The coefficient of determination (R2) was obtained as:

$$R^2 = \frac{\text{Explained variation}}{\text{Total variance}}$$

It was used to determine the correlation or dependency of the dependent variable on the independent variable. R2, or goodness-of-fit, values varied between 0 and 1.
NRMSE normalized the RMSE (the square root of the mean squared difference between the predicted and real values) by the mean of the measured data:

$$NRMSE = \frac{\sqrt{\frac{1}{n}\sum_{i}\left( Y_{e,i} - Y_i \right)^2}}{\text{mean}(Y)}$$

where Y was the measured data, n was the number of samples, and Ye was the prediction made by the regression model.
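For reference, both metrics can be computed in a few lines; the sketch below follows the definitions above, with y_true and y_pred standing for the measured and predicted forces in one dimension.

```python
import numpy as np

def r2_score(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)           # unexplained variation
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total variance
    return 1.0 - ss_res / ss_tot

def nrmse(y_true, y_pred):
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.mean(y_true)                     # normalized by the mean of the measured data
```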
A t-test was performed to evaluate the effectiveness of domain generalization. It is a statistical test that compares the means of two samples to determine whether a change is significant [38]. It helped determine whether the performance improvement obtained with the SFMG-DTL transfer learning model was statistically significant.
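A minimal sketch of such a paired comparison with SciPy is given below; the per-participant R2 values are hypothetical placeholders, not the study's results.

```python
from scipy import stats

r2_baseline = [0.85, 0.84, 0.86, 0.83, 0.86]   # hypothetical per-participant R2, baseline SDG
r2_sfmg_dtl = [0.88, 0.89, 0.90, 0.87, 0.88]   # hypothetical per-participant R2, SFMG-DTL
t_stat, p_value = stats.ttest_rel(r2_sfmg_dtl, r2_baseline)
significant = p_value < 0.05                   # 95% confidence level
```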

2.5.2. ML and DL Algorithms

For performance evaluation of the SFMG-DTL model, intra-session evaluations were conducted on the two target domains using the baseline FMG-CNN architecture. For intra-session evaluation, a baseline SDA and a baseline SDG model were trained with target training data and evaluated on the same target test data (as described in Section 2.4.2 and Table 2). Intra-session evaluation used the SGD optimizer and hyperparameters (LR = 1E-4, E = 60) for comparable performance. Traditional machine learning algorithms, namely support vector regression (SVR) and its variant multi-dimensional support vector regression (MSVR), were used for performance comparison in SDA only. These algorithms also used the same target training and test data for comparison with SFMG-DTL. The popular SVR model (nu-SVR with hyperparameters: cost (c) = 20, gamma (g) = 1, epsilon (ε) = 1) can predict continuous ordered variables in either a linear or non-linear way. MSVR (c = 0.01:0.5:0.09, g = 0.8:0.2:1.5, ε = 0.08) is capable of estimating force in one direction while considering forces acting in the other dimension. For MSVR, instead of using a separate model to estimate force in each dimension, a single model was trained to predict forces in both dimensions. This model was investigated to determine whether higher accuracies could be achieved while reducing computation resources and time. For both SVR and MSVR, the best values for cost (c) and gamma (g) were obtained by grid search. Separate models were generated to predict forces in the X and Y dimensions for SVR, the intra-session baseline, and SFMG-DTL, while only one MSVR model was trained to predict forces in both dimensions. All models used the radial basis function (RBF) kernel.
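A hedged sketch of the SVR baseline with grid-searched RBF hyperparameters is shown below; the study used a LIBSVM-style nu-SVR, so scikit-learn's NuSVR serves as a close stand-in here, and the parameter grid and variable names are illustrative assumptions.

```python
from sklearn.svm import NuSVR
from sklearn.model_selection import GridSearchCV

# One model per force dimension, RBF kernel, cost and gamma chosen by grid search.
param_grid = {"C": [1, 10, 20, 50], "gamma": [0.1, 0.5, 1.0]}
svr_x = GridSearchCV(NuSVR(kernel="rbf"), param_grid, cv=3)
svr_y = GridSearchCV(NuSVR(kernel="rbf"), param_grid, cv=3)
# svr_x.fit(calib_X, calib_Y[:, 0])   # target training (calibration) data, X-dimension force
# svr_y.fit(calib_X, calib_Y[:, 1])   # target training (calibration) data, Y-dimension force
```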

3. Results

For transfer learning in SDA and SDG, the SFMG-DTL pretrained model was evaluated with two separate target domains (in both cases, the calibration data/target training data (1200 × 32 samples) and target test data (400 × 32 samples) were of the same size). Figure 6 shows plots of the target domain 1 FMG test distributions and the model's performance in estimating forces in the X and Y dimensions during SDA.

3.1. Supervised Domain Adaptation

Supervised domain adaptation was investigated for inter-session FMG data in a repetitive pHRI application with participant P1. The R2 and NRMSE results obtained with the SFMG-DTL model, along with those of the other models, are reported in Figure 7. The proposed deep transfer learner (MSE loss ≈ 5.8) outperformed the other algorithms, including the intra-session baseline SDA (an FMG-CNN model trained and evaluated with the target training and target test data only), in estimating force in the selected SQ-1 motion, with higher accuracy (R2 ≈ 89%) and lower error (NRMSE ≈ 0.10). Among these models, MSVR performed poorly (R2 ≈ 52%) despite using a single model to predict force in both the X and Y dimensions. Both the baseline SDA and SVR showed similar results in predicting force (R2 ≥ 81%). Reported estimation accuracies and losses were averaged over Model X and Model Y.

3.2. Supervised Domain Generalization

Supervised domain generalization was evaluated for inductive transfer learning, where the target distributions were unseen by the pretrained model. An inter-participant/cross-subject test was carried out for five participants (P2:P6) individually. For comparison, an intra-session baseline SDG, using leave-one-out cross-validation (LOOCV) with the target training and target test data, was executed for each participant. The SFMG-DTL model obtained estimation accuracies (R2 ≥ 88%) comparable to those of the baseline SDG model (R2 ≤ 86%) across participants. Thus, transfer learning yielded a 2.4% improvement in estimating forces in the dynamic SQ-2 motion. Moreover, the SFMG-DTL model achieved an estimation error (NRMSE ≈ 0.6) that was 3.75% lower than that of the intra-session model across participants (mean MSE loss ≈ 5.14 N). Individual R2 and NRMSE results (averaged for Model X and Model Y) are reported in Figure 8 for all five participants.

4. Discussion

4.1. Viability of Calibration

The pretrained SFMG-DTL model was retrained with a few calibration data sets to adapt to the target domain. The model worked well for both SDA and SDG once fine-tuned with the calibration/target training data. To investigate the effect of calibration during SDA, the pretrained model was also evaluated on the target test data without fine-tuning towards the target distribution. Interestingly, the pretrained model without fine-tuning could predict forces in the X dimension with high estimation accuracy and low error (R2 ≥ 89%, NRMSE ≈ 0.09%), although it could not estimate well in the Y dimension (R2 ≤ 12%, NRMSE ≥ 8%) with no adaptation to the target domain. For SDG, similar trends were observed in the X dimension (R2 ≥ 89%, NRMSE ≈ 0.09%) and the Y dimension (R2 ≤ 25%, NRMSE ≥ 6%). Muscle contractions in extension/flexion (X dimension) and abduction/adduction (Y dimension) could affect the FSR readings and the model's performance, although this would require further study. Therefore, fine-tuning with calibration data was found to be mandatory for estimating forces in the 2D planar SQ-1 motion for SDA as well as in the SQ-2 motion for SDG.
For compliant collaboration, applied forces in both dimensions needed to be estimated well simultaneously. Therefore, the proposed framework would not work without calibration data. The calibration data represented the instantaneous FMG readings of muscle contraction during interactions, and collecting them was found to be an effective way to include the current state of the muscle readings for certain activities during pHRI. Additionally, using only a few calibration data sets was helpful, as the model could be calibrated within a few minutes.

4.2. Viability of SDG

In this case, the estimation accuracies and errors obtained by the SFMG-DTL model were comparable with the intra-session baseline SDG evaluation for participants P2 and P6, while it performed better for P3–P5. Although the overall performance improvement was limited, it was interesting that the SFMG-DTL model improved accuracies in estimating force in the Y dimension compared to the baseline SDG model for some participants, as shown in Figure 9. A t-test was carried out at a 95% confidence level to compare the performances of the intra-session and SFMG-DTL models. The improvement in estimation accuracies (R2) in the Y dimension with the SFMG-DTL model was found to be statistically significant. This finding could inform the design of FMG-based HMI in future practical applications.

5. Conclusions

Estimating applied hand force using force myography (FMG) can be effective yet challenging due to the transient, time-variant nature of the biosignal. Controlling machines using data-driven models in HMI or pHRI over multiple days is affected by sensor position shifts and/or physiological effects. This study investigated multiple sessions of labelled FMG data to overcome such inherent challenges by pretraining a deep learning model using a CNN architecture. Calibration data from individual participants allowed the pretrained model to be fine-tuned towards each individual target distribution and to adapt to the target task. The proposed SFMG-DTL model was evaluated in both domain-adaptation and domain-generalization transfer learning scenarios and obtained better prediction accuracies and lower losses. The model obtained estimation accuracies (R2) of 89% and 88.4% in SDA and SDG, respectively. In both cases, SFMG-DTL outperformed the SVR, MSVR, and intra-session models. The pretrained deep transfer model achieved improvements over the intra-session model for both intra-subject and cross-subject evaluations (6% and 2.4% increases in estimation accuracy in SDA and SDG, respectively).
Although SDA and SDG showed potential improvements, achieving these in real-time situations still needs to be examined. In addition, in practical scenarios, collecting labelled data is not easy and is sometimes impossible. Therefore, unsupervised domain adaptation in such challenging situations could be investigated in future studies. The SFMG-DTL model performed well for domain generalization but was limited to a certain pHRI collaborative task. A pretrained model using more diversified source domains would play a vital role in improving domain generalization and extending it to other possible interactions. Such a pretrained model can be useful for unseen target domains where target label data are scarce or inadequate in real scenarios. Moreover, an FMG-based transfer learner can make domain adaptation more practical for implementing an FMG-based application for either one-time or periodic usage by overcoming sensor position shifts across multiple elapsed days.

Author Contributions

U.Z. investigated, collected, and analyzed the FMG-based force estimation in intended arm motions via deep transfer learning and wrote the paper. C.M. supervised the project, contributed to discussions and analysis, and participated in manuscript revisions. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canadian Institutes of Health Research (CIHR), and the Canada Research Chair (CRC) program.

Institutional Review Board Statement

The study was conducted in accordance with the research ethics (Study Number: 20180244, Amendment Approval Date: 19 July 2021) approved by the Office of Research Ethics, Simon Fraser University, British Columbia, Canada.

Informed Consent Statement

Informed consent was obtained from all participants, who took part in the study voluntarily.

Data Availability Statement

Data are available upon request to the corresponding author.

Acknowledgments

We would like to thank members of Menrva Research Group for assisting in this project.

Conflicts of Interest

The authors declare no conflict of interest. The principal investigator, Carlo Menon, and members of his research team have a vested interest in commercializing the technology tested in this study if it is proven to be successful and may benefit financially from its potential commercialization. The data are readily available upon request.

References

  1. Xiao, Z.G.; Menon, C. Towards the development of a wearable feedback system for monitoring the activities of the upper-extremities. J. NeuroEng. Rehabil. 2014, 11, 2.
  2. Li, Y.; Zhang, Q.; Zeng, N.; Chen, J.; Zhang, Q. Discrete hand motion intention decoding based on transient myoelectric signals. IEEE Access 2019, 7, 81360–81369.
  3. Duan, F.; Dai, L.; Chang, W.; Chen, Z.; Zhu, C.; Li, W. sEMG-based identification of hand motion commands using wavelet neural network combined with discrete wavelet transform. IEEE Trans. Indust. Elect. 2016, 63, 1923–1934.
  4. Allard, U.C.; Nougarou, F.; Fall, C.L.; Giguère, P.; Gosselin, C.; Laviolette, F.; Gosselin, B. A convolutional neural network for robotic arm guidance using sEMG based frequency-features. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2016), Daejeon, Korea, 9–14 October 2016; pp. 2464–2470.
  5. Meattini, R.; Benatti, S.; Scarcia, U.; De Gregorio, D.; Benini, L.; Melchiorri, C. An sEMG-based human–robot interface for robotic hands using machine learning and synergies. IEEE Trans. Compon. Packag. Manuf. Tech. 2018, 8, 1149–1158.
  6. Oskoei, M.A.; Hu, H. Myoelectric control systems—A survey. Biomed. Signal Process. Control 2007, 2, 275–294.
  7. Xiao, Z.G.; Menon, C. A review of force myography research and development. Sensors 2019, 19, 4557.
  8. Jiang, X.; Merhi, L.K.; Xiao, Z.G.; Menon, C. Exploration of force myography and surface electromyography in hand gesture classification. Med. Eng. Phys. 2017, 41, 63–73.
  9. Belyea, A.; Englehart, K.; Scheme, E. FMG Versus EMG: A comparison of usability for real-time motion recognition based control. IEEE Trans. Biomed. Eng. 2019, 66, 3098–3104.
  10. Radmand, A.; Scheme, E.; Englehart, K. High-density force myography: A possible alternative for upper-limb prosthetic control. J. Rehab. R. D. (JRRD) 2016, 53, 443–456.
  11. Ha, N.; Withanachchi, G.P.; Yihun, Y. Performance of Forearm FMG for Estimating Hand Gestures and Prosthetic Hand Control. J. Bionic Eng. 2019, 16, 88–98.
  12. Godiyal, A.K.; Verma, H.K.; Khanna, N.; Joshi, D. A force myography-based system for gait event detection in overground and ramp walking. IEEE Trans. Instrum. Meas. 2018, 67, 2314–2323.
  13. Godiyal, A.K.; Mondal, M.; Joshi, S.D.; Joshi, D. Force Myography Based Novel Strategy for Locomotion Classification. IEEE Trans. Human-Mach. Syst. 2018, 48, 648–657.
  14. Zakia, U.; Jiang, X.; Menon, C. Deep learning technique in recognizing hand grasps using FMG signals. In Proceedings of the 2020 11th IEEE Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 4–7 November 2020; pp. 0546–0552.
  15. Andersen, T.B.; Eliasen, R.; Jarlund, M.; Yang, B. Force myography benchmark data for hand gesture recognition and transfer learning. arXiv 2020, arXiv:2007.14918.
  16. Anvaripour, M.; Khoshnam, M.; Menon, C.; Saif, M. FMG- and RNN-Based Estimation of Motor Intention of Upper-Limb Motion in Human-Robot Collaboration. Front. Robot. AI 2020, 7, 183.
  17. Scheme, E.; Englehart, K. Electromyogram pattern recognition for control of powered upper-limb prostheses: State of the art and challenges for clinical use. J. Rehab. R. D. 2011, 48, 643–659.
  18. Côté-Allard, U.; Fall, C.L.; Drouin, A.; Campeau-Lecours, A.; Gosselin, C.; Glette, K.; Gosselin, B. Deep Learning for Electromyographic Hand Gesture Signal Classification Using Transfer Learning. IEEE Trans. Neural Sys. Rehab. Eng. 2019, 27, 760–771.
  19. Côté-Allard, U.; Fall, C.L.; Campeau-Lecours, A.; Gosselin, C.; Laviolette, F.; Gosselin, B. Transfer learning for sEMG hand gestures recognition using convolutional neural networks. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 1663–1668.
  20. Kobylarz, J.; Bird, J.J.; Faria, D.R.; Ribeiro, E.P.; Ekárt, A. Thumbs up, thumbs down: Non-verbal human-robot interaction through real-time EMG classification via inductive and supervised transductive transfer learning. J. Ambient Intell. Human Comput. 2020, 11, 6021–6031.
  21. Du, Y.; Jin, W.; Wei, W.; Hu, Y.; Geng, W. Surface EMG-Based Inter-Session Gesture Recognition Enhanced by Deep Domain Adaptation. Sensors 2017, 17, 458.
  22. Kanoga, S.; Matsuoka, M.; Kanemura, A. Transfer Learning Over Time and Position in Wearable Myoelectric Control Systems. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 1–4.
  23. Prahm, C.; Schulz, A.; Paaßen, B.; Schoisswohl, J.; Kaniusas, E.; Dorffner, G.; Aszmann, O. Counteracting Electrode Shifts in Upper-Limb Prosthesis Control via Transfer Learning. IEEE Trans. Neural Sys. Rehab. Eng. 2019, 27, 956–962.
  24. Prahm, C.; Paassen, B.; Schulz, A.; Hammer, B.; Aszmann, O. Transfer Learning for Rapid Re-calibration of a Myoelectric Prosthesis After Electrode Shift. In Converging Clinical and Engineering Research on Neurorehabilitation II. Biosystems & Biorobotics; Springer: Berlin/Heidelberg, Germany, 2017; p. 15.
  25. Jiang, X.; Bardizbanian, B.; Dai, C.; Chen, W.; Clancy, E.A. Data Management for Transfer Learning Approaches to Elbow EMG-Torque Modeling. IEEE Trans. Biomed. Eng. 2021, 2592–2601.
  26. Vidovic, M.M.C.; Hwang, H.J.; Amsüss, S.; Hahne, J.M.; Farina, D.; Müller, K.R. Improving the Robustness of Myoelectric Pattern Recognition for Upper Limb Prostheses by Covariate Shift Adaptation. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 24, 961–970.
  27. Côté-Allard, U.; Gagnon-Turcotte, G.; Phinyomark, A.; Glette, K.; Scheme, E.J.; Laviolette, F.; Gosselin, B. Unsupervised Domain Adversarial Self-Calibration for Electromyography-Based Gesture Recognition. IEEE Access 2020, 8, 177941–177955.
  28. Jiang, X.; Merhi, L.K.; Menon, C. Force exertion affects grasp classification using force myography. IEEE Trans. Human-Mach. Syst. 2017, 48, 219–226.
  29. Sakr, M.; Jiang, X.; Menon, C. Estimation of user-applied isometric force/torque using upper extremity force myography. Front. Robot. AI 2019, 6, 120.
  30. Sakr, M.; Menon, C. Exploratory evaluation of the force myography (fmg) signals usage for admittance control of a linear actuator. In Proceedings of the IEEE International Conference on Biomedical Robotics and Biomechatronics, Twente, The Netherlands, 26–29 August 2018; pp. 903–908.
  31. Zakia, U.; Menon, C. Estimating exerted hand force via force myography to interact with a biaxial stage in real-time by learning human intentions: A preliminary investigation. Sensors 2020, 20, 2104.
  32. Zakia, U.; Menon, C. Toward Long-Term FMG Model-Based Estimation of Applied Hand Force in Dynamic Motion During Human–Robot Interactions. IEEE Trans. Human-Mach. Syst. 2021, 51, 310–323.
  33. Ghifary, M.; Balduzzi, D.; Kleijn, W.B.; Zhang, M. Scatter Component Analysis: A Unified Framework for Domain Adaptation and Domain Generalization. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1414–1430.
  34. Kouw, W.M.; Loog, M. An introduction to domain adaptation and transfer learning. arXiv 2018, arXiv:1812.11806.
  35. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2009, 22, 1345–1359.
  36. Yang, Q.; Zhang, Y.; Dai, W.; Pan, S.J. Transfer Learning; Cambridge University Press: Cambridge, UK, 2020.
  37. Motiian, S.; Piccirilli, M.; Adjeroh, D.A.; Doretto, G. Unified Deep Supervised Domain Adaptation and Generalization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5716–5726.
  38. Field, A. Discovering Statistics Using IBM SPSS, 5th ed.; Sage: Los Angeles, CA, USA, 2017.
Figure 1. The proposed SFMG-DTL transfer learning model for estimating applied force during pHRI on a planar workspace with a linear robot.
Figure 2. Setup used for data collection and evaluation of SFMG-DTL: (a) linear robot with gripper and end-effector on top, (b) two FMG bands, (c) interaction force in square motion SQ-1 with variable sizes in domain adaptation, (d) interaction force in square motion SQ-2 in domain generalization, and (e) participant P1 interacting with the robot by applying force in dynamic SQ-1 motion.
Figure 3. Proposed FMG-CNN architecture (Model X, Y).
Figure 4. FMG-based transfer learning: (a) estimating applied interactive forces via SDA and SDG and (b) fine-tuning process of the pretrained SFMG-DTL model.
Figure 5. SFMG-DTL: unified transfer learning framework for SDA and SDG.
Figure 6. Target dataset 1 used in SDA: (a) target test FMG data; (b) true forces and estimated forces in X and Y dimensions estimated by the retrained SFMG-DTL learner.
Figure 7. Performances of ML and DL models in case i: on target dataset 1 (supervised domain adaptation): (a) estimation accuracies (R2) and (b) error in estimation (NRMSE). Averaged values (Model X and Model Y) are reported for SVR, intra-session, and SFMG-DTL.
Figure 8. Performances of ML and DL models in case ii: on target dataset 2 (supervised domain generalization): (a) estimation accuracies (R2) and (b) error in estimation (NRMSE). Averaged values (Model X and Model Y) are reported for both intra-session and SFMG-DTL.
Figure 9. Comparison of the SFMG-DTL model with the intra-session baseline SDG model on case ii (target dataset 2) in estimating forces in the X and Y dimensions in domain generalization.
Table 1. Acronyms used.
SDA: Supervised domain adaptation
SDG: Supervised domain generalization
Ds: Source domain
Dt: Target domain
Ts: Source task
Tt: Target task
Cd: Calibration data
SQ-1: Interaction force in square motion with variable sizes in domain adaptation
SQ-2: Interaction force in square motion in domain generalization
Dt-SDA, Tt-SDA: Target domain and target task in inter-session SDA
Dt-SDG, Tt-SDG: Target domain and target task in inter-participant SDG
Dsi: Multiple source domains
Fxt: Estimated applied force in X dimension
Fyt: Estimated applied force in Y dimension
Table 2. Source and target domains.
Pretraining phase:
  • Source domain: Dsi = {Xs, Ys} {P1}, where Dsi = {Ds1 ∪ Ds2 ∪ Ds3} = 8400 × 32 samples; TSDA: applied force in SQ-1 motion
  • Hyperparameters: SGD, epochs: 40, LR: 1E-4
Evaluation phase:
  • Target domain, case i: Dt-SDA = {Xs, Ys} {P1}, where TSDA: applied force in SQ-1 motion
  • Target domain, case ii: Dt-SDG = {Xs, Ys} {P2, …, P6}, where TSDG: applied force in SQ-2 motion
  • Hyperparameters: SGD, epochs: 60, LR: 1E-5
  • Fine-tuning (calibration data): Cd = {Xc, Yc}, 1200 × 32 samples
  • Target test data: Dt = {Xt, Yt}, 400 × 32 samples
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
