Article

Human–Robot Skill Transferring and Inverse Velocity Admittance Control for Soft Tissue Cutting Tasks

1 College of Engineering, China Agricultural University, Beijing 100083, China
2 State Key Laboratory of Intelligent Agricultural Power Equipment, Beijing 100083, China
3 Key Laboratory of Agricultural Equipment for Conservation Tillage, Ministry of Agriculture and Rural Affairs, Beijing 100083, China
* Author to whom correspondence should be addressed.
Agriculture 2024, 14(3), 394; https://doi.org/10.3390/agriculture14030394
Submission received: 18 December 2023 / Revised: 25 February 2024 / Accepted: 27 February 2024 / Published: 29 February 2024
(This article belongs to the Special Issue Agricultural Collaborative Robots for Smart Farming)

Abstract: Robotic meat cutting is increasingly in demand in the meat industry due to safety issues, labor shortages, and inefficiencies. This paper proposes a multi-demonstration human–robot skill transfer framework to address the flexible and generalized cutting of sheep hindquarters, which have complex 3D anatomy structures, by imitating humans. To improve generalization across meat sizes and demonstrations and to extract target cutting behaviors, multi-demonstrations of cutting are encoded into a low-dimensional latent space through principal components analysis (PCA), a Gaussian mixture model (GMM), and Gaussian mixture regression (GMR). To improve the robotic cutting flexibility and the accuracy of cutting behavior reproduction, this study combines a modified dynamic movement primitive (DMP) high-level behavior generator with low-level joint admittance control (AC) through real-time inverse velocity (IV) kinematics solving, constructing the IVAC-DMP control module. The experimental results show that the maximum residual meat thickness in sheep hindquarter cutting is 3.1 mm for sample 1 and 3.8 mm for sample 2. The residual rates of samples 1 and 2 are 5.6% and 4.8%. Both meet the requirements for sheep hindquarter separation. The proposed framework is advantageous for harvesting high-value meat products and provides a reference technique for robot skill learning in interaction tasks.

1. Introduction

Meat cutting is a crucial component of the meat industry’s value chain. This process involves breaking down a carcass into primal parts and precisely harvesting meat cuts with varying values [1,2]. Skilled workers with a profound understanding of animal anatomy are vital to guaranteeing the final cuts’ consistency and quality and determining their market prices. However, the traditional manual cutting methods have several issues, including biological variation, training, sanitation concerns, and harsh working conditions [3]. With the recent labor shortages, supply inefficiencies caused by COVID-19, and other limitations, the meat industry has seen a considerable increase in the need for automation. Robotic solutions offer a promising approach to address these challenges by enabling human-like cutting operations, reducing labor costs, improving food safety, and promoting a sustainable meat supply [4].
Robotic cutting is achieved through three main methods [5], i.e., primal sawing, 2D-space cutting, and 3D-space cutting. The primal sawing method is efficient in primal cutting and middle-piece separation [6]. However, simple sawing actions are insufficient for harvesting meat cuts from forequarters, hindquarters, rib cages, and carcass flanks. The 2D-space methods have been primarily adopted for precisely cutting primal pieces [7,8], such as carcass flanks or rib cages, utilizing deep learning and multi-DoF robot systems. The 2D-space methods are highly limited in cutting meat with more complex 3D anatomy structures, e.g., sheep hindquarters. The motion planning and control of robot cutting for sheep hindquarter separation is difficult due to several factors, i.e., cutting scenario variability, cutting close to the bone, variable bone size, and complex anatomic structures [9]. Compliant cutting in a 3D space can precisely separate meat tissue from bones or different meat tissues from each other. Xie et al. [10] proposed an admittance control method to cut along the ischial bone with efficiency and accuracy comparable to humans for sheep hindquarter cutting. However, the cutting path was constrained to several planes around the bone, which limited the flexibility of the cutting. Meat and Livestock Australia (MLA) proposed a force-controlled robot for sheep hindquarter cutting [11]. The robot was equipped with a blade whose shape was optimized to lower the cutting forces; however, the generalization of the cutting motions was not addressed. Furthermore, a robotic prototype was proposed to cut the shoulders of the sheep forequarter [12] through an oscillating cutting motion and force control, but generalizing the oscillating cutting pattern between forequarters and hindquarters remains a problem. In addition, Nabil et al. [13] utilized a finite element model to accurately predict the deformation of the meat tissue during cutting and control the robot's motion. Although the study allows for precise tracking of the fascial tissues between the meat tissues of a beef hind leg, the bones were not considered. More work is needed to achieve flexible and generalized motion planning coupled with force control.
Robots have been trained to execute flexible motions in interaction tasks based on the human–robot skill transfer (HRST) framework [14]. The framework involves a high-level nonlinear dynamical system, such as a dynamic movement primitive (DMP), and a low-level compliant controller [15]. The nonlinear dynamical system combines a set of radial basis functions (RBFs) that mimic human muscle unit behaviors to generate flexible motion trajectories [16]. Meanwhile, the low-level controller is an impedance controller (IC) or admittance controller (AC) for the robot joints, which handles the interaction between the robot and objects [17,18]. The DMP model has been coupled with feedback terms related to the interaction force to form a Cartesian space AC [19,20] in grinding, handling, and robot collaboration tasks. However, the method was highly limited by inverse kinematics solving (IKS), since the desired trajectories were modulated online. In our preliminary research [21], we utilized the DMP model to generate the desired movement in sheep hindquarter separation and the inverse velocity (IV) algorithm for online IKS of the robot with AC. However, the low-level joint control had no feedback to the DMP model, which limited the flexibility of the method. The studies mentioned above consider joint space control and high-level planning separately, so the robotic cutting flexibility and the accuracy of cutting behavior reproduction are highly limited. Further studies that combine low-level joint AC and the high-level DMP behavior system through real-time IV kinematics solving are needed to improve the integration, consistency, and efficiency of the collaboration between the two systems.
Moreover, the studies mentioned above rely on single demonstrations, so generalization is a crucial issue given the variable types, locations, and dimensions of the cutting objects. The generalization can be addressed through multi-demonstration HRST. Gams et al. [22] proposed the compliant movement primitive (CMP) model to learn the interactive forces and motion trajectories and utilized the Gaussian process regression (GPR) model to encode multi-demonstrations. Low motion tracking error and compliant control were achieved. In addition, Wu and Billard [23] combined the Stable Estimator of Dynamical Systems (SEDS) with a Gaussian mixture model (GMM) and Gaussian mixture regression (GMR) to learn a compliant cutting behavior model from multiple demonstrations. The model was then integrated into the IC of the robot for cutting single-layered skin-like soft tissue and responding to human interactions. These studies are implemented under an IC framework, which requires an expensive lightweight backdrivable robot arm to produce joint torques precisely. For the task of cutting hindquarters, HRST through multi-demonstration on a robot with AC and real-time IV remains an open problem.
To address the efficient collaboration between human-like behavior generation and joint-space compliant control and to achieve flexible and generalized hindquarter cutting, this paper proposes a multi-demonstration HRST framework. To improve generalization across meat sizes and demonstrations and to extract target cutting behaviors, multi-demonstrations of cutting are encoded into a low-dimensional latent space through principal components analysis (PCA), a Gaussian mixture model (GMM), and Gaussian mixture regression (GMR). To improve the robotic cutting flexibility and the accuracy of cutting behavior reproduction, this study combines a modified dynamic movement primitive (DMP) high-level behavior generator with low-level joint admittance control (AC) through real-time inverse velocity (IV) kinematics solving, constructing the IVAC-DMP control module. The proposed framework is evaluated in four cutting scenarios. The proposed method has the potential to reduce the cutting force, minimize meat loss, and increase meat yield, which is advantageous for harvesting high-value meat products. Additionally, it can serve as a reference technique for flexible and generalized robot skill learning in interaction tasks.
The rest of the paper is structured as follows. The proposed framework is presented in Section 2. Section 2.2 presents the data acquisition and preprocessing. The multi-demonstration human–robot skill transfer with GMM-GMR is then explained in Section 2.3. Moreover, Section 2.4 demonstrates the IVAC of robot joints. The evaluation and the discussion are presented in Section 3. The conclusion is provided in Section 4.

2. Materials and Methods

This section provides a detailed description of the proposed framework for imitating and controlling the cutting of soft tissue by robots.

2.1. System Overview

2.1.1. Cutting Behavior Representation

The cutting behavior is represented by $h_d(t)$,
$$
\begin{aligned}
H_d &= [h_d(t_1), h_d(t_2), \ldots, h_d(t_M)], \quad F_d = [f_d(t_1), f_d(t_2), \ldots, f_d(t_M)], \quad T_d = [t_1, t_2, \ldots, t_M],\\
h_d(t) &= [x_d(t), q_d(t)]^T, \quad x_d(t) = [x_{d,x}(t), x_{d,y}(t), x_{d,z}(t)]^T,\\
q_d(t) &= [q_{d,w}(t), q_{d,x}(t), q_{d,y}(t), q_{d,z}(t)]^T,\\
f_d(t) &= [f_{d,x}(t), f_{d,y}(t), f_{d,z}(t), \tau_{d,x}(t), \tau_{d,y}(t), \tau_{d,z}(t)]^T,
\end{aligned} \tag{1}
$$
where $t$ is the time, $H_d$ is the collected time series of cutting behavior, and $M$ is the number of time points in the series. $F_d$ is the collected time series of cutting force, and $T_d$ is the series of time points. The position of the cutting tool in the world coordinate system is represented by $x_d(t)$, with coordinates $x_{d,x}(t)$, $x_{d,y}(t)$, $x_{d,z}(t)$ along the $x$, $y$, $z$ axes, respectively. The orientation of the cutting tool in the world coordinate system is represented by the quaternion $q_d(t)$, with scalar part $q_{d,w}(t)$ and imaginary part $[q_{d,x}(t), q_{d,y}(t), q_{d,z}(t)]$. The cutting force/torque measured by the F/T sensor is represented by $f_d(t)$, where $f_{d,x}(t)$, $f_{d,y}(t)$, $f_{d,z}(t)$ are the decoupled measured forces and $\tau_{d,x}(t)$, $\tau_{d,y}(t)$, $\tau_{d,z}(t)$ the decoupled measured torques about the $x$, $y$, $z$ axes, respectively.
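To make the representation concrete, the following minimal sketch shows one way to hold a demonstration in memory. It assumes NumPy; the class name, array layout, and sample count are illustrative choices, not structures defined by the paper.

```python
import numpy as np

class CuttingDemo:
    """One demonstration: tool poses h_d(t), wrenches f_d(t), and time stamps T_d.
    A hypothetical container mirroring Equation (1); the layout is an assumption."""

    def __init__(self, M=500):
        self.T = np.linspace(0.0, 10.0, M)                  # T_d: time points [s]
        self.X = np.zeros((3, M))                           # x_d(t): positions (x, y, z) [m]
        self.Q = np.tile([[1.0], [0.0], [0.0], [0.0]], M)   # q_d(t): quaternions (w, x, y, z)
        self.F = np.zeros((6, M))                           # f_d(t): forces [N] and torques [N·m]

    def h(self, i):
        """Cutting behavior vector h_d(t_i) = [x_d(t_i), q_d(t_i)]^T (7-dimensional)."""
        return np.concatenate([self.X[:, i], self.Q[:, i]])
```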

2.1.2. Process of Training and Control

The process of imitation learning and control is divided into three stages, as shown in Figure 1. Given multiple groups of human behavior time-series samples from the same cutting scenario, we train a GMM on the probability distribution of human behavior at each time point. We then generate the target cutting behavior through GMR with the trained GMM. The target cutting behavior is subsequently encoded as a DMP system, which generates the robot end effector's target position, orientation, and feedforward cutting force in the base coordinate system. Finally, we input the target cutting behavior to the IVAC module, which controls the robot's joint angles during cutting to achieve human-like flexible cutting.

2.1.3. Experiment Platform

As shown in Figure 2, the experiment platform includes a loading device, a cutting tool, a visual motion capture system, and a multi-degree-of-freedom robot arm. The cutting tool is constructed by connecting a meat separation blade with a six-axis F/T sensor using a metal frame. An ArUco augmented reality marker [24] is fixed to the left side of the cutting tool, and two Intel RealSense D435i cameras track its pose to form the visual motion capture system. The D435i cameras detect the ArUco marker at a frequency of 30 Hz. They verify the marker's encoded ID and utilize the corner detection tool in OpenCV to determine the marker's position and orientation in the camera's coordinate system, which enables accurate tracking. The marker's ID is 123; this unique ID distinguishes the marker from other fiducial markers within view. The marker is 60 mm × 60 mm, ensuring clear visibility from the preset distance, and is a finely machined anodized aluminum sheet with distinct corners and low reflection. Meanwhile, the F/T sensor measures the decoupled forces and torques along the x, y, and z axes.
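As a concrete illustration of the marker tracking step, the sketch below detects the ID-123 marker and estimates its pose in one camera frame. It assumes the pre-4.7 OpenCV aruco API and a DICT_4X4_250 dictionary; the dictionary choice and the intrinsics K and distortion coefficients dist are assumptions, since the paper uses markers generated per Garrido-Jurado et al. [24].

```python
import cv2

# Marker parameters taken from the text; the dictionary choice is an assumption.
MARKER_ID = 123
MARKER_SIZE = 0.060  # 60 mm edge length [m]

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_250)
params = cv2.aruco.DetectorParameters_create()  # pre-4.7 OpenCV aruco API

def track_marker(frame, K, dist):
    """Return (rvec, tvec) of marker T in the camera frame, or None if unseen."""
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary, parameters=params)
    if ids is None or MARKER_ID not in ids.flatten():
        return None
    i = list(ids.flatten()).index(MARKER_ID)          # verify the encoded ID
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        [corners[i]], MARKER_SIZE, K, dist)
    return rvecs[0], tvecs[0]                          # pose of T in C1 (or C2)
```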
The F/T sensor has flanges at the front and rear; the front flange is connected to the metal frame of the cutting tool, and the rear flange can be connected with the flange of the robot end or with a 3D-printed grip. When the rear flange connects with the grip, the demonstrator holds the cutting tool by the grip to cut. Otherwise, the robot arm manipulates the cutting tool to cut.

2.2. Data Acquisition and Preprocessing

2.2.1. Data Acquisition

This study collects the cutting behavior and force data in four demonstration scenarios to construct the datasets. The experiment was conducted in the College of Engineering, China Agricultural University, with indoor white light and at a room temperature of 17 °C. The experiment materials include sheep meat from local (Beijing) meat plants and foam blocks. The raw data are calibrated, aligned, and then utilized to train the GMM-GMR model. To ensure the robot learns the exact human behaviors, this study only uses actual demonstrations from real humans, and no data augmentation methods are employed. The datasets include 9570 data points (2348 from Scenario 1, 884 from Scenario 2, 1358 from Scenario 3, and 4980 from Scenario 4) from 38 demonstration samples (4 from Scenario 1, 2 from Scenario 2, 2 from Scenario 3, and 30 from Scenario 4). Scenario 1 is cutting the medial hind-leg muscles attached to the ventral ischium; Scenario 2 is cutting the lateral hind-leg muscles attached to the dorsal ischium; Scenario 3 is cutting the lateral hind-leg muscles attached to the dorsal ilium; Scenario 4 is segmenting a foam block pasted on a rigid surface.

2.2.2. Data Calibration and Filtering

The raw data are calibrated to obtain the transform matrices between the robot base coordinate system $W$, the robot end mounting flange center coordinate system $E$, the F/T sensor coordinate system $S$, the marker coordinate system $T$, the Camera 1 coordinate system $C_1$, and the Camera 2 coordinate system $C_2$. $T_{WE}$ denotes the transform matrix between $W$ and $E$; $T_{ES}$ between $E$ and $S$; $T_{C_1T}$ between $C_1$ and $T$; $T_{C_2T}$ between $C_2$ and $T$; $T_{WC_1}$ between $W$ and $C_1$; $T_{WC_2}$ between $W$ and $C_2$; and $T_{TE}$ between $T$ and $E$. The data calibration process includes seven steps:
  • $T_{WE}$ is the homogeneous matrix representation of $E$'s pose in $W$. The pose is read from the low-level embedded controller of the robot. Since the end mounting flange and the F/T sensor are connected by a rigid mechanical structure, $T_{ES}$ can be pre-obtained.
  • $T_{C_1T}$ is the homogeneous matrix representation of $T$'s pose in $C_1$, and $T_{C_2T}$ is the homogeneous matrix representation of $T$'s pose in $C_2$; both are provided by the ArUco marker detection algorithm. From $T_{WE}$, $T_{C_1T}$, and $T_{C_2T}$, the matrices $T_{WC_1}$, $T_{WC_2}$, $T_{TE_1}$, and $T_{TE_2}$ are obtained through hand–eye calibration.
  • We convert the measured pose of $T$ in $C_1$ and $C_2$ to the pose of $E$ in $W$ with Equation (2), using the calibrated $T_{WC_1}$, $T_{WC_2}$, $T_{TE_1}$, and $T_{TE_2}$,
$$T_{WE_1} = T_{WC_1}\, T_{C_1T}\, T_{TE_1}, \quad T_{WE_2} = T_{WC_2}\, T_{C_2T}\, T_{TE_2}. \tag{2}$$
  • We rewrite the homogeneous matrices T W E 1 and T W E 2 into pose vectors and obtain the position x d and quaternion q d of the cutting tool in the world coordinate system by Kalman filtering the pose vectors from the two camera sources.
  • We convert the cutting forces measured in the coordinate system $S$ to the coordinate system $E$ by Equation (3), using the calibrated $T_{SE}$,
$$f_e = \mathrm{Ad}_{T_{SE}}^{T} f_s, \tag{3}$$
    where $f_e$ denotes the cutting force in $E$, $f_s$ the cutting force in $S$, and $\mathrm{Ad}_{T_{SE}}$ the adjoint representation of $T_{SE}$.
  • To ensure accurate cutting force measurements, we use a gravity-compensated calibration method [25] to eliminate the impact of the cutting tool’s self-weight on the collected data. The resulting cutting force is denoted as f d . The components of this force, f d , x , f d , y , and f d , z , represent different directions. f d , x points towards the side of the blade, f d , z points towards the tip, and f d , y points towards the edge, all within the blade plane.
  • Dynamic time warping (DTW) is utilized to align the sample time-series lengths within the same scenario in a Python environment; a minimal sketch of such an alignment is given below.
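The following is a minimal DTW sketch under the assumption of a plain dynamic-programming alignment with Euclidean cost; the paper only states that DTW is run in Python, so the library choice and cost function here are assumptions.

```python
import numpy as np

def dtw_path(a, b):
    """a: (d, Ma) and b: (d, Mb) time series. Returns the warping path as
    index pairs (i, j) that align a[:, i] with b[:, j]."""
    Ma, Mb = a.shape[1], b.shape[1]
    D = np.full((Ma + 1, Mb + 1), np.inf)   # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, Ma + 1):
        for j in range(1, Mb + 1):
            cost = np.linalg.norm(a[:, i - 1] - b[:, j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover the optimal alignment.
    path, i, j = [], Ma, Mb
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

The path can then be used to resample every demonstration in a scenario to a common length before concatenation.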

2.2.3. Latent Space Representation of Dataset

To improve learning efficiency, this study encodes the high-dimensional time series in the dataset into the one-dimensional latent variable series $Z_{cat}$. First, the samples in the dataset are concatenated, as shown in Equation (4),
$$H_{cat} = [H_{d,1}, H_{d,2}, \ldots, H_{d,L}], \quad T_{cat} = [T_{d,1}, T_{d,2}, \ldots, T_{d,L}], \tag{4}$$
where H cat denotes the concatenated series of cutting behavior. T cat denotes the concatenated series of time points. L denotes the number of samples. H d , 1 , H d , 2 , , H d , L are the samples. Second, we perform a principal components analysis (PCA) on H cat . The PCA extracts the correlation and importance information among different axes of the cutting tool in the latent variables. The feature vector with the largest eigenvalue obtained by PCA is as follows,
$$W_{pca} = [w_{pca,1}, w_{pca,2}, \ldots, w_{pca,6}], \tag{5}$$
where W pca denotes the feature vector with elements w pca , 1 , w pca , 2 , , w pca , 6 . Then, H cat is transformed into its latent variable representation using Equation (6),
$$Z_{cat} = W_{pca} H_{cat}, \tag{6}$$
where Z cat denotes the latent variable series with a length of L × M .
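The latent encoding of Equations (4)-(6) reduces to a single projection once the leading eigenvector is known. A minimal sketch, assuming NumPy arrays and mean-centering before the eigen-decomposition (the paper does not state its centering convention):

```python
import numpy as np

def encode_latent(samples):
    """samples: list of (d, M_l) arrays H_{d,l}. Returns (W_pca, Z_cat)."""
    H_cat = np.concatenate(samples, axis=1)            # Equation (4): d x (L*M)
    H0 = H_cat - H_cat.mean(axis=1, keepdims=True)     # center before PCA (assumed)
    cov = H0 @ H0.T / H0.shape[1]                      # d x d covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    W_pca = eigvecs[:, np.argmax(eigvals)]             # Equation (5): leading eigenvector
    Z_cat = W_pca @ H_cat                              # Equation (6): 1-D latent series
    return W_pca, Z_cat
```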

2.3. Learning of Cutting Behaviors with GMM and GMR

2.3.1. Learning from Multi-demonstrations of Cutting Behaviors with GMM

The GMM is a statistical model for representing a given population as several Gaussian-distributed subpopulations. Therefore, it is suitable for learning cutting behavior with multi-demonstrations from the same scenario [26]. In this study, the GMM is trained on the dataset Z cat , T cat , with the state vector consisting of the cutting behavior latent variable z cat and time t cat , as shown in Equation (7),
$$
p(\xi) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}_k(\mu_k, \Sigma_k), \quad \text{with} \ \sum_{k=1}^{K} \pi_k = 1, \qquad
\xi = \begin{bmatrix} t_{cat} \\ z_{cat} \end{bmatrix}, \quad
\mu_k = \begin{bmatrix} \mu_{t,k} \\ \mu_{z,k} \end{bmatrix}, \quad
\Sigma_k = \begin{bmatrix} \Sigma_{t,k} & \Sigma_{tz,k} \\ \Sigma_{zt,k} & \Sigma_{z,k} \end{bmatrix}, \tag{7}
$$
where $p(\xi)$ is the probability distribution of the state vectors, modeled as a mixture of $K$ multivariate Gaussians. $\mathcal{N}_k(\mu_k, \Sigma_k)$ represents the $k$th Gaussian with center $\mu_k$, covariance matrix $\Sigma_k$, and weight $\pi_k$.
The values of $\pi_k$, $\mu_k$, and $\Sigma_k$ are obtained by training on the input data through the expectation maximization (EM) method. The EM algorithm iteratively improves the similarity between the GMM and the realistic distribution of the data samples to achieve maximum likelihood estimation of the sample distribution. The iterative optimization process involves initialization, an E-step, and an M-step (a compact training sketch follows the steps below):
  • The mixture model consists of 10 Gaussians. To reduce the sensitivity of the EM algorithm to the selection of the initial values, we use k-means to initialize the centers and covariance matrices of the Gaussians. After initializing, we train the GMM with the dataset of the four cutting scenarios that were collected;
  • The E-step calculates the intermediate variable γ m , k using current μ k and Σ k , as is shown in Equation (8),
$$\gamma_{m,k} = \frac{\pi_k\, \mathcal{N}_k(\xi_m \mid \mu_k, \Sigma_k)}{\sum_{k'=1}^{K} \pi_{k'}\, \mathcal{N}_{k'}(\xi_m \mid \mu_{k'}, \Sigma_{k'})}, \tag{8}$$
    where $\gamma_{m,k}$ denotes the conditional probability that $\xi_m$ is generated by the $k$th Gaussian;
  • Then, the M-step updates π k , μ k , and Σ k using γ m , k calculated in the E-step, as is shown in Equation (9),
$$
\pi_k^{\text{new}} = \frac{\sum_{m=1}^{M} \gamma_{m,k}}{M}, \quad
\mu_k^{\text{new}} = \frac{\sum_{m=1}^{M} \gamma_{m,k}\, \xi_m}{\sum_{m=1}^{M} \gamma_{m,k}}, \quad
\Sigma_k^{\text{new}} = \frac{\sum_{m=1}^{M} \gamma_{m,k} \left(\xi_m - \mu_k\right)\left(\xi_m - \mu_k\right)^T}{\sum_{m=1}^{M} \gamma_{m,k}}, \tag{9}
$$
    where π k new denotes the weight after updating. μ k new denotes the center after updating. Σ k new denotes the covariance matrix after updating.
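A compact sketch of this training step, assuming scikit-learn's GaussianMixture (which performs exactly this k-means-initialized EM); the paper does not name its GMM implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_cutting_gmm(T_cat, Z_cat, K=10):
    """Fit a K-component GMM on state vectors xi = [t; z], Equation (7)."""
    xi = np.column_stack([T_cat, Z_cat])        # (L*M, 2) state vectors
    gmm = GaussianMixture(n_components=K,
                          covariance_type="full",
                          init_params="kmeans",  # k-means init, as in the text
                          max_iter=200)
    gmm.fit(xi)                                  # EM: Equations (8) and (9)
    return gmm                                   # pi_k, mu_k, Sigma_k inside
```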

2.3.2. Robotic Cutting Behavior Generation with GMR

As shown in Figure 3, this study employs GMR to generate the target latent space trajectory $Z_{ref}$ for the cutting behaviors. In Figure 3a–d, the green dots represent the points in the datasets, and the thick red lines represent the regression trajectories generated by the GMR. The ellipses represent the Gaussians in the GMM: the radii and orientation of each ellipse represent the covariance matrix $\Sigma_k$ of a Gaussian, and the crosses represent the means of the Gaussians. Figure 3e–h show the corresponding cutting paths. In each of the four scenarios, the GMM-GMR model effectively averages the latent variable trajectories across demonstrations. Significant differences are found between the GMM and GMR trajectories learned in the four scenarios, since the cutting paths vary.
The regression function is calculated using the following equation:
$$z_{ref}(t) = \hat{\mu}_z(t) = \sum_{k=1}^{K} w_k(t)\, \hat{\mu}_{z,k}(t), \tag{10}$$
with,
$$
\hat{\mu}_{z,k}(t) = \mu_{z,k} + \Sigma_{zt,k}\, \Sigma_{t,k}^{-1} \left(t - \mu_{t,k}\right), \quad
w_k(t) = \frac{\pi_k\, \mathcal{N}(t \mid \mu_{t,k}, \Sigma_{t,k})}{\sum_{k'=1}^{K} \pi_{k'}\, \mathcal{N}(t \mid \mu_{t,k'}, \Sigma_{t,k'})}, \tag{11}
$$
where z ref ( t ) denotes the regression value of the latent variable at time t. μ ^ z , k ( t ) denotes the regression value of μ z , k at time t, and w k denotes the weight. μ z , k , Σ t , k , and Σ z , k are parameters defined in Equation (7). In this study, the values of the latent variables were calculated using Formula (10) at each time point, and the reference cutting behavior time series Z ref in latent space was obtained,
$$Z_{ref} = [z_{ref}(t_1), z_{ref}(t_2), \ldots, z_{ref}(t_M)]. \tag{12}$$
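The regression of Equations (10)-(12) can be written directly from a fitted two-dimensional GMM over $[t, z]$. A minimal sketch, assuming the scikit-learn model from the previous snippet and SciPy for the Gaussian density:

```python
import numpy as np
from scipy.stats import norm

def gmr_trajectory(gmm, times):
    """Condition each Gaussian of a 2-D GMM over [t, z] on time t and blend."""
    mu_t = gmm.means_[:, 0]                    # mu_{t,k}
    mu_z = gmm.means_[:, 1]                    # mu_{z,k}
    s_tt = gmm.covariances_[:, 0, 0]           # Sigma_{t,k}
    s_zt = gmm.covariances_[:, 1, 0]           # Sigma_{zt,k}
    Z_ref = []
    for t in times:
        w = gmm.weights_ * norm.pdf(t, loc=mu_t, scale=np.sqrt(s_tt))
        w = w / w.sum()                        # weights w_k(t), Equation (11)
        mu_hat = mu_z + s_zt / s_tt * (t - mu_t)   # conditional means, Equation (11)
        Z_ref.append(np.dot(w, mu_hat))        # Equation (10)
    return np.asarray(Z_ref)                   # Equation (12)
```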

2.3.3. Post-Processing

The latent variable series of cutting behavior obtained by GMR is post-processed to generate the reference cutting behavior time series. First, by decoding the latent variable sequence of Equation (12) with Equation (13), the reference cutting behavior (positions, quaternions, and forces) time series is generated,
$$
H_{ref} = [h_{ref}(t_1), h_{ref}(t_2), \ldots, h_{ref}(t_M)], \quad
H_{ref} = W_{pca}^T\, Z_{ref}, \quad
h_{ref}(t_i) = W_{pca}^T\, z_{ref}(t_i), \quad i = 1, \ldots, M, \tag{13}
$$
where the symbol H ref refers to the time series of reference cutting behaviors. Each element h ref ( t i ) is a cutting behavior vector in the form of Equation (1) at time t i . The symbol W pca refers to the eigenvector in Equation (5). The symbol z ref ( t i ) is a latent variable at time t i . The time series has a length of M.
According to Equation (1), the time series H ref of reference cutting behaviors is divided into X ref , Q ref , and F ref ,
$$
X_{ref} = [x_{ref}(t_1), x_{ref}(t_2), \ldots, x_{ref}(t_M)], \quad
Q_{ref} = [q_{ref}(t_1), q_{ref}(t_2), \ldots, q_{ref}(t_M)], \quad
F_{ref} = [f_{ref}(t_1), f_{ref}(t_2), \ldots, f_{ref}(t_M)], \tag{14}
$$
where X ref denotes the reference tool position sequence, Q ref denotes the reference tool orientation sequence, and F ref denotes the reference cutting force sequence.
Furthermore, the first- and second-order derivatives of the reference positions and quaternions are calculated,
$$
\begin{aligned}
V_{ref} &= [v_{ref}(t_1), v_{ref}(t_2), \ldots, v_{ref}(t_M)], \quad
\dot{V}_{ref} = [\dot{v}_{ref}(t_1), \dot{v}_{ref}(t_2), \ldots, \dot{v}_{ref}(t_M)],\\
\Omega_{ref} &= [\omega_{ref}(t_1), \omega_{ref}(t_2), \ldots, \omega_{ref}(t_M)], \quad
\dot{\Omega}_{ref} = [\dot{\omega}_{ref}(t_1), \dot{\omega}_{ref}(t_2), \ldots, \dot{\omega}_{ref}(t_M)],
\end{aligned} \tag{15}
$$
with
$$
\begin{aligned}
v_{ref}(t_i) &= \frac{x_{ref}(t_i) - x_{ref}(t_{i-1})}{t_i - t_{i-1}}, \quad
\dot{v}_{ref}(t_i) = \frac{v_{ref}(t_i) - v_{ref}(t_{i-1})}{t_i - t_{i-1}},\\
\omega_{ref}(t_i) &= \frac{d_q\!\left(q_{ref}(t_{i-1}), q_{ref}(t_i)\right)}{t_i - t_{i-1}}, \quad
\dot{\omega}_{ref}(t_i) = \frac{\omega_{ref}(t_i) - \omega_{ref}(t_{i-1})}{t_i - t_{i-1}}, \quad i > 1,
\end{aligned} \tag{16}
$$
where $t_i$ denotes the time with $t_1 = 0$, $v_{ref}(t_1) = 0\ \mathrm{m/s}$, $\dot{v}_{ref}(t_1) = 0\ \mathrm{m/s^2}$, $\omega_{ref}(t_1) = 0\ \mathrm{rad/s}$, and $\dot{\omega}_{ref}(t_1) = 0\ \mathrm{rad/s^2}$. The unit quaternion representation is adopted to avoid orientation singularities. $d_q(q_{ref}(t_{i-1}), q_{ref}(t_i))$ denotes the orientation offset, which maps the quaternion offset into angular velocity space for differentiation.
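Equation (16) amounts to backward finite differences, with the quaternion logarithm supplying the angular velocity. A minimal sketch, assuming quaternions in (w, x, y, z) order; the exact $d_q$ convention follows [28], so the sign and argument order here are assumptions:

```python
import numpy as np

def quat_log_offset(q0, q1):
    """d_q(q0, q1): rotation vector (angle * axis) taking q0 to q1."""
    w0, v0 = q0[0], -q0[1:]                    # conj(q0) = (w0, -v0)
    w1, v1 = q1[0], q1[1:]
    w = w1 * w0 - np.dot(v1, v0)               # Hamilton product q1 * conj(q0)
    v = w1 * v0 + w0 * v1 + np.cross(v1, v0)
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    return 2.0 * np.arctan2(n, w) * v / n

def reference_derivatives(T, X, Q):
    """Backward differences of Equation (16); T: (M,), X: (3, M), Q: (4, M)."""
    M = len(T)
    V = np.zeros((3, M)); W = np.zeros((3, M))
    for i in range(1, M):
        dt = T[i] - T[i - 1]
        V[:, i] = (X[:, i] - X[:, i - 1]) / dt
        W[:, i] = quat_log_offset(Q[:, i - 1], Q[:, i]) / dt
    A = np.zeros((3, M)); dW = np.zeros((3, M))
    A[:, 1:] = np.diff(V, axis=1) / np.diff(T)     # accelerations
    dW[:, 1:] = np.diff(W, axis=1) / np.diff(T)    # angular accelerations
    return V, A, W, dW
```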

2.4. Admittance Control in Joint Space Coupling with DMPs

2.4.1. Dynamic Motion Primitives of Target Cutting Behaviors

(1) Activation function: As in the original DMPs, we convert the time variable into a phase-space activation signal on $(0, 1)$ through an activation function. We take the sigmoid system of Equation (17), with upper and lower limits, as the activation function,
$$\tau \dot{s} = \alpha_s\, s (1 - s), \tag{17}$$
where $s$ is the phase-space activation signal and $\alpha_s = 2 \log(0.01/0.99)$. $s$ starts at 0.99 and decreases to 0.01 at $t = \tau$ to simulate neural signal activation and attenuation.
(2) Modified DMP model: First, considering the transformation relationship between quaternions and angular velocities, we extend the model proposed by Hoffmann et al. [27] to the quaternion space and build the improved DMP model of Equation (18) for the reference cutting behavior,
$$
\tau^2 \begin{bmatrix} a_d \\ \alpha_d \end{bmatrix} =
\begin{bmatrix}
K_p \left[ s \left( w_p^T g - x + x_0 \right) + (1 - s)\left(x_g - x\right) \right] \\
K_q \left[ s \left( w_q^T g - d_q(q, q_0) \right) + (1 - s)\, d_q(q_g, q) \right]
\end{bmatrix}
- \begin{bmatrix} D_p\, v_d \\ D_q\, \omega_d \end{bmatrix}, \tag{18}
$$
with
$$
\tau \begin{bmatrix} \dot{x} \\ \dot{q} \end{bmatrix} = \begin{bmatrix} v \\ \eta * q / 2 \end{bmatrix}, \quad
\tau \begin{bmatrix} a_d \\ \alpha_d \end{bmatrix} = \begin{bmatrix} \dot{v}_d \\ \dot{\omega}_d \end{bmatrix}, \tag{19}
$$
where $K_p$, $K_q$, $D_p$, $D_q$ are constant coefficients and $\tau$ is the time scaling factor. $x$ denotes the end-effector position, $x_0$ the initial position, and $x_g$ the final position. $q$ denotes the quaternion of the end effector's orientation, $q_0$ the initial quaternion, and $q_g$ the final quaternion. $v$ denotes the velocity of the end effector and $\omega$ the angular velocity; $a_d$ denotes the acceleration of the end effector and $\alpha_d$ the angular acceleration. $\eta$ is a quaternion with a scalar part of 0 and an imaginary part of $\omega$. $d_q : \mathbb{R}^4 \to \mathfrak{so}(3)$ is the logarithmic mapping of the quaternion deviation to angular velocity space, as defined in [28]. $g$ denotes the radial basis vector, and $w_p$ and $w_q$ are $N \times 3$ parameter matrices. The $n$th element of $g$ is defined as follows
$$g_n = \frac{\phi_n(s)}{\sum_{n'=1}^{N} \phi_{n'}(s)}, \tag{20}$$
with,
$$\phi_n(s) = \exp\!\left(-h_n \left(s - c_n\right)^2\right), \tag{21}$$
where ϕ n ( s ) denotes the n th Gaussian radial basis function, h n the width of this basis, and c n the center of this basis.
Equation (18) defines a virtual critically damped system with attraction points $[x_g, q_g]^T$ and $[w_p^T g, w_q^T g]^T$. $[x_g, q_g]^T$ converges the system towards the final pose, while $[w_p^T g, w_q^T g]^T$ adjusts the shape of the motion trajectory. The importance of $[x_g, q_g]^T$ and $[w_p^T g, w_q^T g]^T$ to the system is weighted by $1-s$ and $s$, respectively. Early in the motion, when $s$ is large, the shape of the generated trajectory dominates; as $s$ tends toward 0, convergence to the final pose $[x_g, q_g]^T$ becomes more important.
(3) Learning Parameters: Y p and Y q are the targets derived from the demonstration trajectory time series,
$$Y_p = [y_p(t_1), y_p(t_2), \ldots, y_p(t_M)], \quad Y_q = [y_q(t_1), y_q(t_2), \ldots, y_q(t_M)], \tag{22}$$
with
$$
\begin{aligned}
y_p(t) &= s \left[ x_{ref}(t) - x_{ref}(t_1) \right] - (1 - s) \left[ x_{ref}(t_M) - x_{ref}(t) \right] + \frac{D_p\, v_{ref}(t)}{K_p} + \frac{a_{ref}(t)}{K_p},\\
y_q(t) &= s\, d_q\!\left(q_{ref}(t), q_{ref}(t_1)\right) - (1 - s)\, d_q\!\left(q_{ref}(t_M), q_{ref}(t)\right) + \frac{D_q\, \omega_{ref}(t)}{K_q} + \frac{\alpha_{ref}(t)}{K_q}.
\end{aligned} \tag{23}
$$
The reference trajectory and feedforward force jointly describe the cutting behavior. The parameter matrices of the nonlinear terms w p and w q are the representations of the cutting behavior in the parameter space.
w p and w q are learned from the demonstration data by solving the optimization problems (24),
$$
\min_{w_p} \sum_{i=1}^{M} \left\| w_p^T g(s_i)\, s_i - y_p(t_i) \right\|, \quad
\min_{w_q} \sum_{i=1}^{M} \left\| w_q^T g(s_i)\, s_i - y_q(t_i) \right\|, \tag{24}
$$
where, at each time step $i$, $s_i$ represents the activation signal and $g(s_i)$ the corresponding radial basis vector. The parameter matrices to be solved are $w_p$ and $w_q$. We utilize the locally weighted linear regression (LWR) method to solve Equation (24) in this study; a minimal sketch follows.
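The LWR fit decouples across basis functions: each $w_n$ is a scalar weighted least-squares solution of $y \approx w_n s$ under the weighting $\phi_n(s)$. A minimal sketch for the position part, assuming illustrative choices of $N$, the centers, and the widths (the paper does not report its values):

```python
import numpy as np

def lwr_fit(S, Y, N=30):
    """S: (M,) phase signals s_i in (0, 1); Y: (M, 3) targets y_p(t_i).
    Returns w_p as an (N, 3) parameter matrix, one row per basis function."""
    C = np.linspace(S.min(), S.max(), N)                      # centers c_n (assumed)
    H = 1.0 / (np.diff(C, append=C[-1] + C[1] - C[0]) ** 2)   # widths h_n (assumed)
    W = np.zeros((N, 3))
    for n in range(N):
        phi = np.exp(-H[n] * (S - C[n]) ** 2)                 # Equation (21)
        # One-basis model y ~ w_n * s, weighted by phi: classic LWR for DMPs.
        W[n] = (phi * S) @ Y / np.sum(phi * S * S)
    return W
```

The orientation parameters $w_q$ are fitted identically with the quaternion-space targets $y_q$.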

2.4.2. Robot Joints Admittance Control

In this study, we implement the proposed algorithm on an Aubo-i7 robot arm (Shanghai, China) with six joints. The controller interacts with the robot's low-level embedded motion controller to set and read the joint positions through the robot operating system (ROS) software architecture and TCP-CAN transparent transmission at 50 Hz. Treating the robot body as rigid, we design the admittance control law of Equation (25) for the joints' motion to achieve safe and effective cutting,
$$K_j \left(\theta_m - \theta_r\right) + D_j \left(\dot{\theta}_m - \dot{\theta}_r\right) = \Delta\tau, \tag{25}$$
where $K_j$ is the joint stiffness coefficient matrix, $D_j$ the joint damping coefficient matrix, $\theta_m$ the commanded joint positions, and $\Delta\tau$ the joint torque offsets. We apply the transformation of Equation (26) to convert the cutting force offsets in the end effector coordinate system into joint torque offsets,
$$\Delta\tau = J^T \left(F_{fb} - F_{ff}\right), \tag{26}$$
where $F_{ff}$ is the feedforward force and $F_{fb}$ is the measured feedback force. $K_j$ and $D_j$ are calculated by Equations (27) and (28),
$$K_j = J^T \Gamma K J, \tag{27}$$
$$D_j = J^T \Gamma^{\frac{1}{2}} D J, \tag{28}$$
where $D = 2\sqrt{KM}$, $M = \mathrm{diag}(m_1, \ldots, m_6)$, $K = \mathrm{diag}(k_1, \ldots, k_6)$, and $\Gamma = \mathrm{diag}(\gamma_1, \ldots, \gamma_6)$. $K$ is the stiffness coefficient matrix in the end effector coordinates, and $D$ is the damping coefficient matrix in the end effector coordinates. $\Gamma$ is the stiffness adjustment factor matrix; its diagonal elements $\gamma_1, \ldots, \gamma_6$ are the stiffness adjustment factors along the corresponding axes of the end effector. The initial values of $\gamma_1, \ldots, \gamma_6$ are all 1. The stiffness and damping of the admittance control in different directions can be set separately by changing $\gamma_1, \ldots, \gamma_6$.
The admittance control converts the end effector force offsets into angular velocities of the robot's joints. Based on Equations (25)–(28), we propose the update laws of Equations (29) and (30) for the joint velocities and angles,
$$\dot{\theta}_m \leftarrow \dot{\theta}_r + \left(J^T D J\right)^{-1} \left[ J^T \left(F_{fb} - F_{ff}\right) - J^T K J \left(\theta_m - \theta_r\right) \right], \tag{29}$$
$$\theta_m \leftarrow \theta_m + \dot{\theta}_m\, dt. \tag{30}$$
After that, the joints’ angles are sent to the low-level embedded motion controller of the robot for execution. This paper does not include a discussion of the low-level embedded motion controller.
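One 50 Hz control-loop iteration of Equations (29) and (30) can be sketched as follows, assuming NumPy and $\Gamma$ at its initial value of identity (so the stiffness scaling drops out); the Jacobian and the Cartesian gain matrices K and D are supplied by the robot model:

```python
import numpy as np

def admittance_step(theta_m, theta_r, dtheta_r, J, F_fb, F_ff, K, D, dt=0.02):
    """One 50 Hz update of the commanded joint state (theta_m, dtheta_m).
    J: (6, 6) Jacobian; K, D: (6, 6) Cartesian stiffness/damping; forces are 6-D wrenches."""
    JT = J.T
    Dj = JT @ D @ J                       # joint damping, cf. Equation (28) with Gamma = I
    Kj = JT @ K @ J                       # joint stiffness, cf. Equation (27) with Gamma = I
    # Equation (29): force offsets and position offsets drive the command velocity.
    dtheta_m = dtheta_r + np.linalg.solve(
        Dj, JT @ (F_fb - F_ff) - Kj @ (theta_m - theta_r))
    theta_m = theta_m + dtheta_m * dt     # Equation (30)
    return theta_m, dtheta_m
```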

2.4.3. Inverse Velocity Motion Control

The robot updates the reference pose and feedforward force at each time step during cutting. First, it updates $\dot{v}_d$ and $\dot{\omega}_d$ using Equation (18). Then, $v_d$ and $\omega_d$ are updated via the numerical integral of Equation (31),
$$
\begin{bmatrix} v_d \\ \omega_d \end{bmatrix} \leftarrow
\begin{bmatrix} v_d \\ \omega_d \end{bmatrix} +
\begin{bmatrix} \dot{v}_d \\ \dot{\omega}_d \end{bmatrix} \Delta t, \tag{31}
$$
Then, the reference position $x_d$ and the reference orientation $q_d$ are updated,
$$x_d \leftarrow x + \frac{v_d\, \Delta t}{\tau}, \tag{32}$$
$$q_d \leftarrow \exp_q\!\left(\frac{\omega_d\, \Delta t}{2\tau}\right) * q. \tag{33}$$
To achieve continuous tracking of the end effector’s motion and avoid joint angle and velocity limits, we need to solve for the inverse kinematics [29,30,31] and obtain the command joint angles θ r , as described in Equation (25). In detail, we use the instantaneous kinetic energy of the robot joints as the cost function. The constraints we take into account include the first-order forward kinematics of the robot and the physical limitations of the joint angles and velocities. The joint velocities are then obtained by solving the following optimization problem
$$\min_{\dot{\theta}_r}\ \dot{\theta}_r^T M \dot{\theta}_r, \tag{34}$$
$$\text{s.t.} \quad J \dot{\theta}_r = [v_r, \omega_r]^T, \tag{35}$$
$$\theta_r^{\min} \le \theta_r \le \theta_r^{\max}, \tag{36}$$
$$\dot{\theta}_r^{\min} \le \dot{\theta}_r \le \dot{\theta}_r^{\max}, \tag{37}$$
where M is the inertial matrix. Equation (34) is the instantaneous kinetic energy of the joint to be optimized. Equation (35) is the forward kinematic constraint. Equation (36) is the constraint on the physical position of the joint angle. Equation (37) is the physical velocity constraint of the joint. The solution to the above optimization problem is
$$\dot{\theta}_{opt} = J^{\dagger} [v_d, \omega_d]^T. \tag{38}$$
In Equation (38), the pseudo-inverse $J^{\dagger}$ of the Jacobian $J$ is calculated as
$$J^{\dagger} = M^{-1} J^T \left(J M^{-1} J^T\right)^{-1}. \tag{39}$$
In this study, we introduce a compensation term into Equation (38) to eliminate the cumulative error of the numerical calculation and ensure that the robot tracks the motions generated by the modulated DMPs. The final calculation of $\dot{\theta}_r$ is given by Equation (40):
$$\dot{\theta}_r = \dot{\theta}_{opt} + J^{\dagger} \log_6\!\left(X_r^{-1} X_d\right), \tag{40}$$
where X r is the homogeneous matrix of the end effector pose at θ r derived from the forward kinematics of the robot. X d is the homogeneous matrix of the expected end effector pose, which corresponds to x d , q d . log 6 : SE ( 3 ) R 6 is the logarithmic mapping of the robot’s homogeneous matrix to the twist in motion.
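A minimal sketch of Equations (38)-(40), assuming a 6 x 6 Jacobian away from singularities and a log6 pose-error twist computed elsewhere (e.g., by a kinematics library); the joint limit clipping of Equations (36) and (37) is omitted for brevity:

```python
import numpy as np

def inverse_velocity(J, M, V_d, xi_err):
    """J: (6, 6) Jacobian; M: (6, 6) joint inertia matrix;
    V_d: desired twist [v_d, w_d]; xi_err: log6(X_r^{-1} X_d) pose-error twist."""
    Minv = np.linalg.inv(M)
    J_pinv = Minv @ J.T @ np.linalg.inv(J @ Minv @ J.T)   # Equation (39)
    dtheta_opt = J_pinv @ V_d                             # Equation (38)
    return dtheta_opt + J_pinv @ xi_err                   # Equation (40)
```

The inertia weighting in Equation (39) makes the minimum-kinetic-energy solution of Equation (34) explicit, while the compensation term pulls the integrated joint command back toward the DMP-generated pose.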

3. Results and Discussion

3.1. Human–Robot Skill Transfer

We train the proposed method on the datasets mentioned in Section 2.2 to demonstrate its effectiveness and applicability in learning and generating soft tissue cutting behaviors.
It can be observed from Figure 4 that the PCA eigenvector heat maps vary remarkably across scenarios 1, 2, 3, and 4. This suggests that the latent space variables of Equation (6) can effectively distinguish the cutting behaviors of the four scenarios, even though we use PCA to compute one-dimensional latent variables for simplicity.
Table 1 shows the explanation rates of the GMM, the axis weights of the PCA, and the pose reconstruction errors of the GMR in the four scenarios. Table 1 shows that Scenario 1 has the smallest difference in axis weights, indicating a more complex contact situation. In this scenario, there is contact both between the side of the cutting blade and the ischial surface and between the blade edge and the ischial surface. Moreover, it can be observed from Table 1 that $w_2$ is small in all four scenarios. This indicates that when the side of the cutting knife is in contact with the rigid medium, large elastic deformation may reduce the measured cutting force in this direction. From the perspective of the explanation rate, on the one hand, the explanation rates of Scenarios 4 and 2 are 74.71% and 80.96%, indicating that in these two scenarios the GMM-GMR module can effectively represent the cutting behavior in the latent space. On the other hand, the explanation rates of Scenarios 1 and 3 are near 50.00%, indicating that in these two scenarios the effectiveness of the latent space representation of cutting behavior can be further improved.
In addition, the position reconstruction errors of Scenarios 2, 3, and 4 are less than 0.05 m, with that of Scenario 3 close to 0.05 m, and in all four scenarios the orientation reconstruction error is less than 0.05 rad. Considering the jitter and operating accuracy during human demonstrations, this suggests that the latent space GMM-GMR module can effectively generate the target cutting behaviors for the robot in all four scenarios.

3.2. Foam Cutting Test

We carry out a foam-cutting experiment to verify the reliability and effectiveness of the proposed framework in adapting to position changes of the rigid medium under soft tissue. The robot uses the learned parameters and a fixed stiffness coefficient to perform one cut each at loading platform heights of +30 mm, +20 mm, +10 mm, 0 mm, −10 mm, −20 mm, and −30 mm. The to-be-cut soft tissue (a foam block) is pasted on the surface of the rigid medium, a plastic pad used for food processing. We fix the plastic pad on the loading platform, which can adjust the pad's height by itself. The foam block measures 60 × 50 × 50 mm, and the height of the plastic backing plate at the beginning of the test is 0 mm. As shown in Figure 5, the cutting process can be divided into four phases, i.e., entering, cutting, releasing, and returning.
As shown in Figure 6, the robot successfully separates the foam in each test. The first phase is 0–5.0 s, the second phase 5.0–10.5 s, the third phase 10.5–15.0 s, and the fourth phase 15.0–20.0 s. In the first phase, the main contact direction is the Z-axis. The direction of $F_x$ is negative, the direction of $F_y$ is positive, and the direction of $F_z$ is negative. On the one hand, the changes in $F_x$ and $F_y$ are small, falling from 0 N to approximately −5 N and rising from 0 N to approximately 5 N, respectively; both are within the safe range of the force sensor and robot payload. On the other hand, $F_z$ decreases rapidly. The second phase is the cutting phase. $F_y$ rises from approximately 5 N to 25 N and maintains a large pressure within the safety range of the force sensor and robot load. In this phase, $F_y$ mainly applies pressure on the plastic plate to ensure effective cutting of the thin layer at the boundary between the foam and the plastic plate and to compensate for the resistance of the plastic backing plate to the cutting tool. $F_x$ further drops from −5 N to −10 N. $T_y$ and $F_x$ have similar trends, and both are close to the data recorded during the human demonstration. $F_x$ and $T_y$ are mainly derived from the demonstration: when demonstrating, the blade plane is not perpendicular to the plastic plate. The third phase is the releasing phase, in which the cutting force on each axis gradually returns to 0 N. From the perspective of motion, there is only a minor deviation between the test trajectories and the demonstration trajectories. This shows that after the cutting blade leaves the plastic plate and foam, the IVAC algorithm can effectively adjust the motion trajectories and reduce the offset between the target trajectory and the actual trajectory.
Table 2 presents the maximum, minimum, and average forces on each axis of the force sensor during robot cutting at different platform heights, as well as the reference values measured during human teaching. When the platform height is lower than −10 mm, the amplitude of $F_z$ is close to that of the human demonstration. As the platform height increases, the amplitude of $F_z$ also increases. However, in each test, $F_z$ remains within the safe range of the force sensor and robot load.
The above results suggest that the proposed IVAC algorithm can adapt quickly to different platform heights by changing its motion trajectory and has a wide adjustment range. The proposed framework can ensure that the cutting force stays within a safe range, which is suitable for cutting when the position of the rigid medium under the soft tissue is uncertain. The results also show that the cutting force errors on the same axis differ considerably across execution phases, suggesting that the optimal stiffness differs across phases and axes during execution. Follow-up research can consider optimizing the stiffness coefficients of the IVAC through reinforcement learning or variable admittance control.

3.3. Sheep Hindquarters Separation Test

This study carries out a sheep hindquarter separation test to evaluate the effectiveness of the proposed framework in the meat cutting scenario. The separation processes of the left hind leg of the hindquarter are selected as the test scenarios. In the test, the robot needed to cut the leg muscles along the hip bone (ilium and ischium) of the hindquarter to separate the left hind-leg from the hip bone. To complete the separation, the robot uses the proposed IVAC algorithm and the learned skill to perform cutting in Scenarios 1–3 sequentially. We prepare two sheep hindquarters for testing.
The separation effects are shown in Figure 7; the robot successfully separates the hip bone and hind-leg of the sheep carcass. The hind-leg of the divided sheep carcass is relatively complete, and only a little meat is left on the hip bone. Moreover, no bone cracks appeared during the separation processes, and there were no collisions between the cutting tool and the bone. This indicates that the robot's cutting is compliant and accurate.
Table 3 shows the meat residual rate (RR), maximum residual meat thickness, and maximum absolute value of cutting force. The maximum residual meat thickness is used to characterize the adaptive performance of IVAC. The smaller the maximum residual meat thickness, the better the adaptive performance of IVAC when there are differences in shape and size between the test sample and the teaching sample. The maximum absolute value of the cutting force reflects the cutting resistance under the premise of completing the cutting. The smaller the value, the higher the cutting efficiency. Moreover, RR is defined as the ratio of the weight of the residual hind leg meat on the hip bone of the sheep carcass to the total weight of the separated hind-leg. RR evaluates the loss of high-value leg meat during the cutting process. The smaller the value, the less the loss. RR is calculated with Equation (41)
$$RR = \frac{2.0 \times m_{res}}{m_{res} + m_{leg}} \times 100\%, \tag{41}$$
where R R denotes the residual rate, m res denotes the weight of the residual hind-leg meat on the hip bone, and m leg denotes the weight of the separated hind-leg.
According to Table 3, the maximum residual meat thickness is 3.1 mm for sample 1 and 3.8 mm for sample 2, both less than 5 mm. The segmentation residual rates of samples 1 and 2 are 5.6% and 4.8%. The results indicate that the proposed method can segment sheep carcasses well and has good generalization and effectiveness. In addition, the small maximum residual meat thickness and segmentation residual rate obtained in the experiment indicate that the segmentation accuracy is high enough to segment the hind-legs of sheep carcasses. The last four rows of Table 3 give the maximum absolute values of the cutting force in the directions of $F_x$, $F_y$, $F_z$, and $T_x$. It can be observed from Table 3 that the largest absolute cutting force values on each axis for samples 1 and 2 in the three scenarios are all within the safe range of the robot and the force sensor. This suggests that the proposed method can not only ensure that the cutting depth and residual rate meet the process requirements but also avoid excessive cutting force through adaptation. The proposed framework can realize human–robot skill transfer in the sheep hindquarter separation test.
Figure 8 shows the cutting force and torque curves for sample 1 to illustrate the similarity between robot cutting and the human demonstration. The first column of Figure 8 shows the results of cutting scenario 1. (a) The robotic cutting force tendency on each axis is consistent with the reference curve measured in the human demonstration, indicating that the robot can imitate the human cutting behavior well. (b) The measured robot cutting force is close to the human demonstration. (c) The maximum robot cutting force in the $F_z$ direction (around step 200) is greater than the human cutting force; however, it is still within the safe range of the robot and force sensor. This suggests that the proposed method can perform cutting efficiently while avoiding excessive cutting force.
The curves in the second column of Figure 8 correspond to scenario 2, where the tendency and amplitude of the robot's cutting forces $F_y$, $F_z$, and $T_x$ are very close to those of human cutting. As shown in the third column of Figure 8, the tendency and amplitude of the robot cutting force in scenario 3 are also close to the human demonstration, although the vibration of robot cutting is greater. Additionally, Figure 8 shows that the total separation time of the hind-leg is nearly 30 s, suggesting that the proposed method has practical efficiency compared with manual cutting.

4. Conclusions

The meat industry requires automation due to labor shortages and inefficiencies. Traditional 2D methods are limited when cutting complex 3D anatomy structures, such as sheep hindquarters, and existing compliant 3D cutting can precisely separate meat tissue from bones but lacks flexibility. A multi-demonstration HRST framework is proposed to address these challenges, reduce cutting force, and minimize meat loss. This method potentially benefits flexible and generalized robot skill learning in interaction tasks and provides an alternative approach for harvesting high-value cuts and increasing yields in meat industries. Multi-demonstrations of cutting are encoded through the latent space GMM-GMR module to generate the target cutting behaviors for the robot, which enables generalization across meat parts and sizes. The IVAC-DMP control provides the flexibility of modulating robot joint motion during cutting and avoids excessive cutting forces.
The experimental results show that the latent space GMM-GMR can effectively extract cutting skills from multi-demonstrations. The foam-cutting test results show that the IVAC module can adapt to position changes of the rigid medium under soft tissue reliably and efficiently. The sheep hindquarter separation test results show that the maximum residual meat thicknesses of samples 1 and 2 are 3.1 mm and 3.8 mm, and the residual rates are 5.6% and 4.8%. Both meet the requirements of sheep hindquarter separation.
There are several limitations to the proposed framework. First, the current PCA representation of cutting behavior may not be completely accurate. In the future, it may be beneficial to use models with stronger representation capabilities, e.g., a variational autoencoder, to improve the accuracy of the latent space representation. Second, the demonstration scenarios used in this study are limited to cutting sheep hindquarters and foam blocks. To extract more generalizable semantic cutting skills, future studies could apply the framework to broader tissue-cutting tasks by training with larger-scale datasets from various meat pieces over longer epochs. Finally, the IVAC module uses fixed admittance parameters. To improve interaction flexibility, future studies could apply variable AC or IC to provide optimal stiffness at different stages of the cutting process and in different cutting directions.

Author Contributions

Conceptualization, K.L. and B.X.; methodology, K.L. and B.X.; software, K.L.; validation, K.L. and Z.C.; formal analysis, K.L.; investigation, K.L. and Z.C.; resources, Z.C.; data curation, Z.C.; writing—original draft preparation, K.L.; writing—review and editing, K.L., B.X., Z.L. and Z.G.; visualization, K.L. and S.J.; supervision, B.X.; project administration, B.X.; funding acquisition, B.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Plan of China (grant number 2018YFD0700804).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HRST  Human–robot skill transfer
AC    Admittance controller
IC    Impedance controller
IV    Inverse velocity
IKS   Inverse kinematics solving
IVAC  Inverse velocity admittance control
DMP   Dynamic movement primitive
CMP   Compliant movement primitive
GMM   Gaussian mixture model
GMR   Gaussian mixture regression
PCA   Principal components analysis
LWR   Locally weighted linear regression
ROS   Robot operating system
MLA   Meat and Livestock Australia
AMPC  Australian Meat Processor Corporation
EM    Expectation maximization
DTW   Dynamic time warping
SEDS  Stable Estimator of Dynamical Systems
DMRI  Danish Meat Research Institute
sEMG  Surface electromyography

References

  1. Li, J.; Xie, B.; Zhai, Z.; Zhang, P.; Hou, S. Research progress of intelligent equipment and technology for livestock and poultry slaughter and processing. Food Mach. 2021, 37, 226–232. [Google Scholar] [CrossRef]
  2. Xu, W.; He, Y.; Li, J.; Zhou, J.; Xu, E.; Wang, W.; Liu, D. Robotization and Intelligent Digital Systems in the Meat Cutting Industry: From the Perspectives of Robotic Cutting, Perception, and Digital Development. Trends Food Sci. Technol. 2023, 135, 234–251. [Google Scholar] [CrossRef]
  3. Arvidsson, I.; Balogh, I.; Hansson, G.Å.; Ohlsson, K.; Åkesson, I.; Nordander, C. Rationalization in Meat Cutting—Consequences on Physical Workload. Appl. Ergon. 2012, 43, 1026–1032. [Google Scholar] [CrossRef]
  4. Echegaray, N.; Hassoun, A.; Jagtap, S.; Tetteh-Caesar, M.; Kumar, M.; Tomasevic, I.; Goksen, G.; Lorenzo, J.M. Meat 4.0: Principles and Applications of Industry 4.0 Technologies in the Meat Industry. Appl. Sci. 2022, 12, 6986. [Google Scholar] [CrossRef]
  5. Kim, J.; Kwon, Y.K.; Kim, H.W.; Seol, K.H.; Cho, B.K. Robot Technology for Pork and Beef Meat Slaughtering Process: A Review. Animals 2023, 13, 651. [Google Scholar] [CrossRef] [PubMed]
  6. Hinrichsen, L. Manufacturing Technology in the Danish Pig Slaughter Industry. Meat Sci. 2010, 84, 271–275. [Google Scholar] [CrossRef] [PubMed]
  7. Guire, G.; Sabourin, L.; Gogu, G.; Lemoine, E. Robotic Cell for Beef Carcass Primal Cutting and Pork Ham Boning in Meat Industry. Ind. Robot. 2010, 37, 532–541. [Google Scholar] [CrossRef]
  8. Li, Z.; Wang, S.; Zhao, S.; Bai, Y. Cutting Methods of Sheeps Trunk Based on Improved DeepLabv3+ and XGBoost. Comput. Eng. Appl. 2021, 57, 263–269. [Google Scholar]
  9. Khodabandehloo, K. Achieving Robotic Meat Cutting. Anim. Front. 2022, 12, 7–17. [Google Scholar] [CrossRef] [PubMed]
  10. Xie, B.; Jiao, W.; Wen, C.; Hou, S.; Zhang, F.; Liu, K.; Li, J. Feature Detection Method for Hind Leg Segmentation of Sheep Carcass Based on Multi-Scale Dual Attention U-Net. Comput. Electron. Agric. 2021, 191, 106482. [Google Scholar] [CrossRef]
  11. Meat and Livestock Australia (MLA). Automated Forequarter Cell Installation for Lamb [EB/OL]. Available online: https://www.mla.com.au/research-and-development/reports/2023/automated-forequarter-cell-installation-for-lamb/ (accessed on 29 November 2023).
  12. AMPC. First Prototype Automation for Deboning Lamb Shoulder Stage 2 [EB/OL]. Available online: https://ampc.com.au/research-development/advanced-manufacturing/first-prototype-automation-for-deboning-lamb-shoulder-stage-2 (accessed on 29 November 2023).
  13. Nabil, E.; Belhassen-Chedli, B.; Grigore, G. Soft Material Modeling for Robotic Task Formulation and Control in the Muscle Separation Process. Robot. Comput. Integr. Manuf. 2015, 32, 37–53. [Google Scholar] [CrossRef]
  14. Maithani, H.; Corrales Ramon, J.A.; Lequievre, L.; Mezouar, Y.; Alric, M. Exoscarne: Assistive Strategies for an Industrial Meat Cutting System Based on Physical Human-Robot Interaction. Appl. Sci. 2021, 11, 3907. [Google Scholar] [CrossRef]
  15. Zeng, C.; Yang, C.G.; Li, Q.; Dai, L. Research Progress in Human-robot Skill Transfer. Acta Autom. Sin. 2019, 45, 16. [Google Scholar]
  16. Burdet, E.; Osu, R.; Franklin, D.W.; Milner, T.E.; Kawato, M. The Central Nervous System Stabilizes Unstable Dynamics by Learning Optimal Impedance. Nature 2001, 414, 446–449. [Google Scholar] [CrossRef] [PubMed]
  17. Zeng, C.; Su, H.; Li, Y.; Guo, J.; Yang, C. An Approach for Robotic Leaning Inspired by Biomimetic Adaptive Control. IEEE Trans. Ind. Inform. 2022, 18, 1479–1488. [Google Scholar] [CrossRef]
  18. Li, Y.; Ganesh, G.; Jarrassé, N.; Haddadin, S.; Albu-Schaeffer, A.; Burdet, E. Force, Impedance, and Trajectory Learning for Contact Tooling and Haptic Identification. IEEE Trans. Robot. 2018, 34, 1170–1182. [Google Scholar] [CrossRef]
  19. Gams, A.; Nemec, B.; Ijspeert, A.J.; Ude, A. Coupling Movement Primitives: Interaction With the Environment and Bimanual Tasks. IEEE Trans. Robot. 2014, 30, 816–830. [Google Scholar] [CrossRef]
  20. Kramberger, A.; Shahriari, E.; Gams, A.; Nemec, B.; Haddadin, S. Passivity Based Iterative Learning of Admittance-Coupled Dynamic Movement Primitives for Interaction with Changing Environments. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018. [Google Scholar]
  21. Xie, B.; Jiao, W.; Liu, K.; Wu, J.; Wen, C.; Chen, Z. Adaptive Segmentation Control Method of Sheep Carcass Hind Legs Based on Contact State Perception. Trans. Chin. Soc. Agric. Mach. 2023, 54, 306–315. [Google Scholar] [CrossRef]
  22. Gams, A.; Ude, A.; Petric, T.; Denisa, M. Learning Compliant Movement Primitives Through Demonstration and Statistical Generalization. IEEE/ASME Trans. Mechatron. 2016, 21, 2581–2594. [Google Scholar]
  23. Wu, R.; Billard, A. Learning From Demonstration and Interactive Control of Variable-Impedance to Cut Soft Tissues. IEEE/ASME Trans. Mechatron. 2022, 27, 2740–2751. [Google Scholar] [CrossRef]
  24. Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.; Medina-Carnicer, R. Generation of Fiducial Marker Dictionaries Using Mixed Integer Linear Programming. Pattern Recognit. 2015, 51, 481–491. [Google Scholar] [CrossRef]
  25. Zhang, L.; Hu, R.; Yi, W. Research on Force Sensing for the End-load of Industrial Robot Based on a 6-Axis Force/Torque Sensor. Acta Autom. Sin. 2017, 43, 439–447. [Google Scholar] [CrossRef]
  26. Hersch, M.; Guenter, F.; Calinon, S.; Billard, A. Dynamical System Modulation for Robot Learning via Kinesthetic Demonstrations. IEEE Trans. Robot. 2008, 24, 1463–1467. [Google Scholar] [CrossRef]
  27. Hoffmann, H.; Pastor, P.; Park, D.H.; Schaal, S. Biologically-Inspired Dynamical Systems for Movement Generation: Automatic Real-Time Goal Adaptation and Obstacle Avoidance. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 2587–2592. [Google Scholar] [CrossRef]
  28. Abu-Dakka, F.J.; Nemec, B.; Jørgensen, J.A.; Savarimuthu, T.R.; Krüger, N.; Ude, A. Adaptation of Manipulation Skills in Physical Contact with the Environment to Reference Force Profiles. Auton. Robot. 2015, 39, 199–217. [Google Scholar] [CrossRef]
  29. Cao, P.; Gan, Y.; Dai, X.; Duan, J. Convex Optimization Solution for Inverse Kinematics of a Physically Constrained Redundant Manipulator. Robot 2016, 38, 257–264. [Google Scholar] [CrossRef]
  30. He, W.; Xue, C.; Yu, X.; Li, Z.; Yang, C. Admittance-Based Controller Design for Physical Human—Robot Interaction in the Constrained Task Space. IEEE Trans. Autom. Sci. Eng. 2020, 17, 1937–1949. [Google Scholar] [CrossRef]
  31. Yamane, K. Admittance Control With Unknown Location of Interaction. IEEE Robot. Autom. Lett. 2021, 6, 4079–4086. [Google Scholar] [CrossRef]
Figure 1. Diagram of the human–robot skill transfer and control framework.
Figure 2. Experiment platform. (a) The cutting tool manipulated by the robot, (b) the cutting tool manipulated manually, (c) the setup for robotic cutting, (d) the setup for demonstration. The cutting tool is manipulated manually first to sample multiple demonstrations; then, the robot is trained and reproduces the same cutting behavior.
Figure 3. GMM-GMR models and the generated latent-variable trajectories in four scenarios. (a–d) The learned Gaussian components of the GMM and the trajectory generated by GMR in Scenarios 1–4, respectively; (e) the medial hind-leg muscle cutting path in Scenario 1; (f) the lateral hind-leg muscle cutting path in Scenario 2; (g) the lateral hind-leg muscle cutting path in Scenario 3; (h) the foam cutting path in Scenario 4.
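As a companion to Figure 3, the snippet below is a minimal Python sketch of time-based GMM fitting followed by Gaussian mixture regression, assuming a one-dimensional latent variable such as a PCA projection. The synthetic demonstrations and all identifiers are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for the PCA latent trajectories of several demonstrations:
# 4 demonstrations of 200 samples, each a (time, latent-value) pair.
rng = np.random.default_rng(0)
t = np.tile(np.linspace(0.0, 1.0, 200), 4)
s = np.sin(2.0 * np.pi * t) + 0.05 * rng.standard_normal(t.size)
X = np.column_stack([t, s])

# Fit a GMM over the joint (t, s) space, as in the left panels of Figure 3.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0).fit(X)

def gmr(gmm, t_query):
    """Gaussian mixture regression: the conditional mean E[s | t]."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros_like(t_query)
    for j, tq in enumerate(t_query):
        # Responsibility of each component for this time input (the
        # 1/sqrt(2*pi) constant cancels in the normalization below).
        h = np.array([w[k] * np.exp(-0.5 * (tq - mu[k, 0]) ** 2 / cov[k, 0, 0])
                      / np.sqrt(cov[k, 0, 0]) for k in range(gmm.n_components)])
        h /= h.sum()
        # Conditional mean of s given t for each component, blended by h.
        cond = [mu[k, 1] + cov[k, 1, 0] / cov[k, 0, 0] * (tq - mu[k, 0])
                for k in range(gmm.n_components)]
        out[j] = np.dot(h, cond)
    return out

s_ref = gmr(gmm, np.linspace(0.0, 1.0, 200))  # reference latent trajectory
```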
Figure 4. PCA feature matrices: (a–d) heat maps of the PCA eigenvectors in Scenarios 1–4, respectively.
Figure 5. Scheme of the foam cutting process. (a) The cutting tool cuts into the foam toward the front and comes into contact with the plastic plate in the entering phase. (b) The robot then splits the foam from front to back in the cutting phase. (c) After cutting, the blade leaves the plastic plate and foam. (d) Finally, the robot returns to the start position and orientation through free movement. (e) A zoomed-in view of frame 4, showing the cutting trajectory.
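The phase structure of Figure 5 is reproduced by the DMP behavior generator. As a rough illustration only, the sketch below implements the standard discrete DMP formulation of [26,27] for a single coordinate; the paper's modified DMP and its goal-adaptation details may differ, and every parameter here is an illustrative assumption.

```python
import numpy as np

def dmp_rollout(y_demo, dt=0.01, n_basis=20, alpha_z=25.0, tau=1.0):
    """Fit a standard discrete DMP to one demonstrated coordinate and roll it
    out. This is the classic formulation; the paper's modified DMP may differ
    in its forcing term and goal adaptation."""
    beta_z, alpha_x = alpha_z / 4.0, 4.0   # critical damping, phase decay
    T, y0, g = len(y_demo), y_demo[0], y_demo[-1]

    # Canonical phase and demonstrated derivatives.
    x = np.exp(-alpha_x * np.linspace(0.0, 1.0, T))
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)

    # Forcing term that would exactly reproduce the demonstration.
    f_target = tau ** 2 * ydd - alpha_z * (beta_z * (g - y_demo) - tau * yd)

    # Locally weighted regression onto Gaussian basis functions of the phase.
    c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))  # basis centers
    h = n_basis / c                                        # basis widths
    psi = np.exp(-h * (x[:, None] - c) ** 2)
    w = np.array([np.sum(psi[:, k] * x * f_target) /
                  (np.sum(psi[:, k] * x ** 2) + 1e-10) for k in range(n_basis)])

    # Roll the DMP out with Euler integration.
    y, z, out = y0, 0.0, []
    for xi in x:
        psi_i = np.exp(-h * (xi - c) ** 2)
        f = xi * (psi_i @ w) / (psi_i.sum() + 1e-10)
        z += (alpha_z * (beta_z * (g - y) - z) + f) / tau * dt
        y += z / tau * dt
        out.append(y)
    return np.array(out)

# Example: reproduce a smooth one-dimensional entering profile.
y_rep = dmp_rollout(np.sin(np.linspace(0.0, np.pi / 2, 200)))
```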
Figure 6. Robot cutting force and motion. (a–d) Force, torque, position, and rotation angle about the X axis; (e–h) the corresponding quantities for the Y axis; (i–l) the corresponding quantities for the Z axis. 'mean' denotes the average curves of the cutting forces, torques, positions, and orientations; 'learned' denotes the desired curves learned from the demonstrations. The first and second columns show the robot cutting forces and torques at different platform heights; the third and fourth columns show the position and orientation of the robot end effector at different platform heights.
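For intuition about how the force tracking in Figure 6 can be realized, here is a hedged sketch of a Cartesian admittance filter combined with a damped pseudo-inverse (inverse velocity) mapping to joint rates. The gains, the `jacobian(q)` call, and the loop wiring are hypothetical placeholders rather than the paper's IVAC-DMP controller.

```python
import numpy as np

# Illustrative admittance gains; placeholders, not the paper's tuned values.
M = np.diag([2.0] * 6)    # virtual inertia (translational and rotational)
B = np.diag([80.0] * 6)   # virtual damping

def admittance_step(f_err, v_adm, dt=0.002):
    """One step of a mass-damper admittance filter, M*dv/dt + B*v = f_err,
    turning a 6-D force/torque error into a 6-D velocity correction."""
    dv = np.linalg.solve(M, f_err - B @ v_adm)
    return v_adm + dv * dt

def inverse_velocity(J, v_task, damping=1e-2):
    """Damped least-squares mapping of a 6-D task-space twist to joint
    velocities: qdot = J^T (J J^T + lambda * I)^-1 v."""
    return J.T @ np.linalg.solve(J @ J.T + damping * np.eye(6), v_task)

# Loop sketch (pseudocode): v_dmp comes from the DMP generator, f_meas from
# the wrist F/T sensor, f_ref is the learned force profile, and jacobian(q)
# is a hypothetical kinematics call.
#   v_adm = admittance_step(f_meas - f_ref, v_adm)
#   qdot  = inverse_velocity(jacobian(q), v_dmp + v_adm)
```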
Figure 7. Effects of hindquarter separation. (a) Scenario 1: cutting the medial hind-leg muscles attached to the ventral side of the ischium. (b) Scenario 2: cutting the lateral hind-leg muscles attached to the dorsal side of the ischium. (c) Scenario 3: cutting the lateral hind-leg muscles attached to the dorsal side of the ilium. The yellow area marks the cut muscles, and the blue area marks the ischium bone.
Figure 8. Cutting force and torque variation curves for sample 1. (a–c) Fx, (d–f) Fy, (g–i) Fz, (j–l) Tx, (m–o) Ty, and (p–r) Tz; within each row, the first, second, and third columns show Scenarios 1, 2, and 3, respectively.
Table 1. Results of latent variable encoding and generation.

| Scenario | Samples | w1 | w2 | w3 | w4 | w5 | w6 | PEavg (m) | OEavg (rad) | Ratio (%) |
|----------|---------|--------|----------|---------|----------|---------|----------|-------|-------|-------|
| 1 | 4 | 0.3864 | −0.01621 | −0.3109 | −0.4989 | 0.4202 | −0.5730 | 0.056 | 0.032 | 55.50 |
| 2 | 2 | 0.4390 | −0.06339 | 0.1329 | −0.8468 | 0.2571 | −0.04962 | 0.038 | 0.025 | 80.96 |
| 3 | 2 | −0.6637 | −0.1099 | 0.6496 | −0.04189 | −0.3370 | −0.1010 | 0.030 | 0.040 | 48.80 |
| 4 | 30 | −0.5927 | −0.2432 | −0.4050 | −0.4467 | −0.2707 | 0.3908 | 0.047 | 0.038 | 74.71 |

w1, w2, w3, w4, w5, and w6 denote the elements of the PCA eigenvector.
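To make the eigenvector elements w1–w6 in Table 1 concrete, the sketch below projects 6-D pose samples (three position and three orientation components per time step) onto the first principal component with scikit-learn and computes reconstruction errors analogous to PEavg and OEavg. The random data and names are placeholders, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder for recorded cutting poses: each row is one time step with
# three position components and three orientation components.
rng = np.random.default_rng(0)
demos = rng.standard_normal((800, 6))

pca = PCA(n_components=1)
latent = pca.fit_transform(demos)   # the 1-D latent trajectory
w = pca.components_[0]              # eigenvector elements w1..w6 (cf. Table 1)

# Reconstruct and measure average position/orientation errors, analogous to
# the PEavg and OEavg columns of Table 1.
recon = pca.inverse_transform(latent)
pe_avg = np.mean(np.linalg.norm(demos[:, :3] - recon[:, :3], axis=1))
oe_avg = np.mean(np.linalg.norm(demos[:, 3:] - recon[:, 3:], axis=1))
ratio = 100.0 * pca.explained_variance_ratio_[0]  # cf. the Ratio (%) column
```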
Table 2. Foam cutting forces and torques during robot cutting at different platform heights (forces in N; torques in N·m).

| | +30 mm | +20 mm | +10 mm | 0 mm | −10 mm | −20 mm | −30 mm | Human |
|---|--------|--------|--------|--------|--------|--------|--------|--------|
| Fx maximum | 2.459 | 1.646 | 0.285 | 0.233 | 0.245 | 0.248 | 0.317 | 0.682 |
| Fx minimum | −9.376 | −9.301 | −9.054 | −8.683 | −8.583 | −8.548 | −8.235 | −10.271 |
| Fx average | −2.560 | −2.582 | −2.579 | −2.383 | −2.286 | −2.227 | −2.158 | −3.177 |
| Fy maximum | 34.400 | 33.778 | 32.592 | 31.604 | 30.917 | 30.180 | 29.119 | 36.452 |
| Fy minimum | −0.063 | −0.155 | 0.001 | −0.031 | −0.031 | −0.133 | −0.237 | −0.189 |
| Fy average | 12.463 | 11.747 | 11.136 | 10.558 | 10.200 | 9.749 | 9.202 | 12.985 |
| Fz maximum | 20.068 | 20.783 | 13.843 | 12.388 | 13.660 | 15.097 | 13.447 | 13.313 |
| Fz minimum | −21.865 | −19.712 | −22.068 | −19.825 | −16.831 | −13.500 | −12.252 | −11.229 |
| Fz average | 2.289 | 2.698 | 1.468 | 0.923 | 1.467 | 2.161 | 2.039 | 2.084 |
| Tx maximum | 0.028 | 0.004 | 0.004 | 0.003 | 0.003 | 0.004 | 0.008 | 0.016 |
| Tx minimum | −9.282 | −9.112 | −8.871 | −8.682 | −8.548 | −8.392 | −8.178 | −8.478 |
| Tx average | −3.237 | −3.105 | −2.940 | −2.827 | −2.741 | −2.658 | −2.540 | −2.747 |
| Ty maximum | 0.727 | 0.485 | 0.046 | 0.044 | 0.044 | 0.048 | 0.061 | 0.041 |
| Ty minimum | −2.371 | −2.335 | −2.315 | −2.280 | −2.259 | −2.249 | −2.215 | −2.241 |
| Ty average | −0.653 | −0.643 | −0.635 | −0.619 | −0.607 | −0.598 | −0.571 | −0.657 |
| Tz maximum | 0.003 | 0.003 | 0.002 | 0.006 | 0.004 | 0.002 | 0.003 | 0.001 |
| Tz minimum | −0.227 | −0.227 | −0.204 | −0.217 | −0.213 | −0.204 | −0.192 | −0.202 |
| Tz average | −0.047 | −0.040 | −0.036 | −0.036 | −0.038 | −0.037 | −0.033 | −0.041 |
Table 3. Performance of the sheep carcass hindquarter cutting tests on two samples.

| Criterion | Sample 1 | Sample 2 |
|-----------|----------|----------|
| Weight [kg] | 5.18 | 6.30 |
| RR [%] | 5.6 | 4.8 |
| Maximal residual thickness [mm] | 3.1 | 3.8 |

| Criterion | Sample 1, Scenario 1 | Sample 1, Scenario 2 | Sample 1, Scenario 3 | Sample 2, Scenario 1 | Sample 2, Scenario 2 | Sample 2, Scenario 3 |
|-----------|--------|--------|--------|--------|--------|--------|
| Fx,max [N] | 6.461 | 3.056 | 4.072 | 7.655 | 7.441 | 4.787 |
| Fy,max [N] | 30.719 | 24.780 | 15.972 | 31.981 | 27.404 | 20.432 |
| Fz,max [N] | 17.968 | 21.855 | 4.185 | 30.686 | 22.858 | 5.004 |
| Tx,max [N·m] | 5.708 | 4.971 | 3.803 | 5.726 | 5.287 | 4.808 |

RR denotes the residual rate; Fi,max and Tx,max denote the maximum absolute force and torque magnitudes recorded during cutting.