Article

An Efficient Motion Adjustment Method for a Dual-Arm Transfer Robot Based on a Two-Level Neural Network and a Greedy Algorithm

Mengqian Chen, Qiming Liu, Kai Wang, Zhiqiang Yang and Shijie Guo

1 School of Mechanical Engineering, Hebei University of Technology, Tianjin 300130, China
2 School of Automobile and Transportation, Chengdu Technological University, Chengdu 611730, China
3 Academy for Engineering & Technology, Fudan University, Shanghai 200433, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(15), 3090; https://doi.org/10.3390/electronics13153090
Submission received: 30 June 2024 / Revised: 24 July 2024 / Accepted: 2 August 2024 / Published: 5 August 2024
(This article belongs to the Special Issue Applications of Artificial Intelligence in Mechanical Engineering)

Abstract

As the manipulation object of a patient transfer robot is a human, who constitutes a complex and time-varying system, motion adjustment is inevitable and essential for ensuring patient safety and comfort. This paper proposes a motion adjustment method based on a two-level deep neural network (DNN) and a greedy algorithm. First, a dataset containing information about human posture and contact forces is collected by experiment. Then, a DNN for estimating the contact force is established and trained on the collected dataset. The adjustment is then conducted by a greedy algorithm that compares the contact force estimated for the next state with the real contact force measured in the current state. To assess validity, we first employed the DNN to estimate contact force and obtained an accuracy of 84% and a speed of 30 ms per estimation, implemented on an affordable processing unit. We then applied the greedy algorithm to a dual-arm transfer robot and found that the motion adjustment efficiently reduced the contact force and improved human comfort. These results validate the effectiveness of our proposal and provide a new approach for adjusting the posture of a care receiver to improve comfort by reducing the contact force between human and robot.

1. Introduction

Nowadays, nursing care for disabled people has become an urgent need in a rapidly aging society. Transferring a disabled person to or from a bed, a wheelchair, or a toilet is a daily task that places a heavy burden on caregivers [1]. To alleviate this burden, the development of transfer nursing robots has attracted wide attention in recent years [2,3,4,5]. In general, a transfer nursing robot's operation has six steps: posture recognition, lifting up, motion adjustment, moving, putting down, and homing, as shown in Figure 1. Motion adjustment is crucial because human comfort can be improved by reducing the burden on the human body (internal and external forces) through it [6,7].
Comfort is a metric for measuring the performance of a dual-arm transfer robot [8]. It is a physical and psychological sense of the care receiver and can be evaluated from the internal and external forces acting on the human body [9]. Scholars have conducted research on adjusting the posture of a care receiver during transfer to reduce these forces. Mukai et al. [10] proposed a tactile-based motion adjustment method to reduce the contact force between a robot and a care receiver. They generated the robot's motion trajectories by tactile-information-based interpolation between preset trajectories for tall and short persons [10,11]. However, because this adjustment was conducted only by changing the horizontal distance between the two arms of the robot, it could not reduce the contact force efficiently. Furthermore, the posture of the care receiver, a key factor affecting patient comfort, was not considered. Hasegawa et al. [12] and Ding et al. [13] proposed a motion-adjusting method based on a mechanical model and a human comfort evaluation. First, the internal and external forces were estimated using the mechanical model. Then, a comfort evaluation function was generated by weighting the obtained forces according to a questionnaire. Finally, the robot's motion was adjusted by optimizing the comfort level through this evaluation function. However, because human comfort is differently sensitive to each force and the structure of the human body is complex, the accuracy of the mechanical model and the comfort evaluation function was low. Therefore, this method can be used neither to optimize the internal and external forces on the human body nor to adjust the robot trajectory. To tackle these problems, Delp et al. [14] and de Zee et al. [15] developed musculoskeletal models that simulate the human body and the contacted objects to estimate the physical interaction between humans and objects (machines or robots). Based on such musculoskeletal models, many effective control schemes have been achieved to assist human rehabilitation or training. Ding et al. [16] developed a musculoskeletal model specialized for a transfer robot to estimate muscle forces, and they extended it with a dynamic model of the human-robot interaction, which can be used to estimate the contact force between human and robot. A comfortable holding posture is then estimated by minimizing the total activity of all muscles. However, processors with high specifications were required to achieve a satisfactorily high speed [17]; this increases the expense and consequently limits the practicality of a patient transfer robot.
To reduce the load on the care receiver and improve their comfort during transfer motion, this paper proposes a method for adjusting the robot trajectory based on the contact force predicted during the transfer motion. The basic idea is that the contact force is related to the posture of the care receiver, and this relation can be modeled using machine learning. First, the contact forces between the human body and the robot arms, which have four dimensions (2D position, magnitude, and direction), and the human posture, expressed by the human joint positions, were collected as a dataset. Then, a two-level deep neural network (DNN) was constructed and trained on the collected samples to predict the contact force. The trajectory adjustment is performed by a greedy algorithm that compares the contact force predicted by the DNN with that measured by the tactile sensor on the robot's arm. Finally, we applied the motion adjustment method to a dual-arm nursing-care robot to adjust the posture of the care receiver and evaluated its effectiveness in improving transfer comfort.

2. Two-Level DNN for Contact Force Estimation

A DNN is a mathematical regression framework [18] that can establish linear and nonlinear relationships between input and output, and it is widely used in practical engineering research. Its strong nonlinear mapping ability and flexible network structure [19,20] avoid the limited nonlinear expressiveness of mechanical-model methods [21]. Therefore, this study uses a two-level DNN to construct a mathematical model that expresses the relationship between the contact force and the lifting state.
As illustrated in Figure 2, the lifting state is first encoded as five human joint positions (A, B, C, D, E), two lifting point positions (G, H), and the human weight distribution (W1, W2, W3, W4). Features are then extracted from the encoded lifting state by the first-level subnetwork, and the second-level subnetwork selects the related features to compute the contact force. The local feature extraction of the convolution layers preserves sufficient information about the lifting state [22], while the global feature fusion of the self-attention mechanism improves the nonlinear expressiveness of the model [23,24]. Together, these properties ensure the high accuracy required for a patient transfer robot's operation.

2.1. Encoder Design

The input of the first-level subnetwork is the lifting state, including the posture and weight of the care receiver and the positions of the lifting points. Because these multi-modal data are difficult to combine mathematically, inspired by the results presented in [25], this study uses an encoder to transform the lifting state into a set of two-dimensional (2D) heatmaps and a set of 2D vector fields [26]. In this way, a unified data form is generated for the input of the first-level subnetwork. A diagram of the encoder is presented in Figure 3.
Heatmaps 1. The Gaussian function, which transforms a 2D point into a 2D matrix, is used to convert the human joint positions into a set of confidence maps S = (S1, S2, …, S5) with Equation (1). Here, to satisfy the requirements of a patient transfer robot [9], five human joints are selected to express the care receiver's posture.
S_i(u_h, v_h) = g(A ‖(u_h, v_h) − p_{j_i}‖),    (1)
where i denotes the index of a human joint and i ∈ {1, 2, …, 5}; (u_h, v_h) is the position of a pixel in the confidence map; p_{j_i} ∈ ℝ^{1×2} is the position of joint i; S_i(u_h, v_h) is the pixel value at position (u_h, v_h); g(·) is the Gaussian function; and A is a parameter of the Gaussian function.
Heatmaps 2. The lifting-point confidence maps S′ = (S′_1, S′_2) are generated as follows [27]:
S′_i(u_h, v_h) = g(A′ ‖(u_h, v_h) − h_{j_i}‖),    (2)
where i denotes the lifting point index and i = 1, 2; (u_h, v_h) is the position of a pixel in the confidence map; h_{j_i} ∈ ℝ^{1×2} is the position of holding point i, which is obtained by the robot's locating system; S′_i(u_h, v_h) is the pixel value at position (u_h, v_h); and A′ is a parameter of the Gaussian function.
Vector fields. The part affinity field (PAF) method, which transforms a line segment into a matrix, is used to transform each human limb, weighted by gravity, into a 2D vector field. The 2D vector fields of the four body parts are generated by the following:
w_i(P) = v_i, if P is on body part i; otherwise, w_i(P) = 0,    (3)
where w_i(P) is the vector value at position P in w_i, and v_i = (p′_i − p_i)/‖p′_i − p_i‖₂ is the unit vector along body part i, where p_i and p′_i denote the endpoints of body part i (in this work, the joint positions).
The set of points on the limb consists of the points within a distance threshold of the line segment, i.e., the points that satisfy the following conditions:
0 ≤ v_i · (P − p_i) ≤ l_i   and   |v_i^⊥ · (P − p_i)| ≤ σ_i,    (4)
where l_i = ‖p′_i − p_i‖₂ is the length of body part i; P ∈ ℝ^{2×1} is a two-dimensional (2D) position in w_i; v_i^⊥ is a vector perpendicular to v_i; and σ_i is the width of body part i, which in this work is defined empirically by the following:
σ_i = k × (m_i / l_i) + t,    (5)
where m_i and l_i are the weight and length of body part i, respectively, and k and t are parameters of the equation. The obtained vector fields are expressed as L = (L1, L2, L3, L4).
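For illustration, the following NumPy sketch encodes one joint heatmap and one weighted-limb vector field. It is a minimal sketch under stated assumptions: the 56 × 56 map size is taken from Section 2.2, while the exact Gaussian form g(·) (an exponential with amplitude A and spread parameter) and the values of k and t are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def joint_heatmap(joint_xy, map_size=56, A=1.0, spread=2.0):
    # Confidence map of Equations (1)/(2): a Gaussian centered on the
    # joint (or lifting-point) position. The Gaussian form is assumed.
    u, v = np.meshgrid(np.arange(map_size), np.arange(map_size))
    dist = np.hypot(u - joint_xy[0], v - joint_xy[1])
    return A * np.exp(-dist**2 / (2.0 * spread**2))

def limb_vector_field(p, p_prime, m, map_size=56, k=0.5, t=1.0):
    # PAF encoding of Equations (3)-(5): pixels inside the limb rectangle
    # receive the limb's unit direction vector v_i; all others stay zero.
    p, p_prime = np.asarray(p, float), np.asarray(p_prime, float)
    field = np.zeros((map_size, map_size, 2))
    l = np.linalg.norm(p_prime - p)          # limb length l_i
    v = (p_prime - p) / l                    # unit direction v_i
    v_perp = np.array([-v[1], v[0]])         # perpendicular direction
    sigma = k * m / l + t                    # limb width, Equation (5)
    for u in range(map_size):
        for w in range(map_size):
            d = np.array([u, w], float) - p
            if 0 <= v @ d <= l and abs(v_perp @ d) <= sigma:  # Equation (4)
                field[w, u] = v
    return field

# Example: encode one joint and a thigh segment carrying 20 kg.
S1 = joint_heatmap((28, 30))
L1 = limb_vector_field((10, 40), (30, 40), m=20.0)
```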

2.2. First-Level Subnetwork

In the first-level subnetwork, a CNN [27] is used to extract local features of the lifting-up state. Previous studies have proven the effectiveness of VGG16 on many image recognition datasets [28,29,30]. In particular, using convolution kernels with a size of 3 × 3 not only enhances the ability to extract local features but also increases the CNN depth; using pooling layers reduces the number of network parameters and prevents over-fitting; and using ReLU (Rectified Linear Unit) activation layers enhances the nonlinear expressiveness of the network [31]. These advantages contribute to achieving high-accuracy, real-time contact force estimation. However, the input of a conventional VGG16 network must be a color image with a fixed size of 224 × 224 × 3 [27], which, in most cases, does not match the sizes of the obtained confidence maps S and S′ and vector fields L. To address this problem, the filter size of the VGG16's first layer is modified so that it generates feature maps that match the second layer of VGG16. The modified network is referred to as the M-VGG16 network.
The structure of the M-VGG16 network is presented in Figure 4. First, the transformed confidence maps S and S′ and vector fields L are concatenated to generate the input maps F1 ∈ ℝ^{56×56×11} of the first-level subnetwork. Then, F1 is analyzed by the revised first convolutional layer C:64-1, whose weights are initialized to 1 to match the input of C:64-2. The following max-pooling layer, P:2, decreases the size of the feature map. Next, the obtained feature maps are analyzed by C:128-3, P:2, C:128-3, and P:2 sequentially to obtain the final feature map F, which is used as the input of the next-level subnetwork. In the proposed design, all convolutional layers except the first are initialized with the corresponding pre-trained convolutional layers of VGG16. A PyTorch sketch of this subnetwork is given below.
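The following sketch is a minimal reconstruction, not the authors' code: the 11 input channels (5 joint heatmaps, 2 lifting-point heatmaps, and 4 limb fields) follow Figure 2, while the widening of the last stage to 256 channels is an assumption made only so that the output matches the 7 × 7 × 256 feature map stated in Section 2.3.

```python
import torch
import torch.nn as nn

class MVGG16(nn.Module):
    # First-level subnetwork: a VGG16-style stack whose first layer is
    # modified to accept the 11-channel 56x56 encoded lifting state.
    def __init__(self, in_ch=11, out_ch=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1),    # modified first layer C:64-1
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1),       # C:64-2, pre-trained weights
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                       # P:2, 56 -> 28
            nn.Conv2d(64, 128, 3, padding=1),      # C:128-3
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                       # P:2, 28 -> 14
            nn.Conv2d(128, out_ch, 3, padding=1),  # assumed widening to 256
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                       # P:2, 14 -> 7
        )

    def forward(self, x):          # x: (batch, 11, 56, 56)
        return self.features(x)    # F: (batch, 256, 7, 7)

feat = MVGG16()(torch.randn(1, 11, 56, 56))  # torch.Size([1, 256, 7, 7])
```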

2.3. Second-Level Subnetwork

In the second-level subnetwork, a transformer-based backbone is used to extract the global features of the obtained feature maps and generate the contact force.
The structure of the second-level subnetwork is presented in Figure 5. Its input, the feature map F obtained from the first-level subnetwork, is processed by a transpose function and a max layer, which adjust the size from 7 × 7 × 256 to 7 × 256 to match the transformer input. The size is adjusted as follows:
F′ = F.transpose(0, 2, 1).max(−1),    (6)
where transpose(0, 2, 1) swaps the second and third dimensions of the feature map, and max(−1) extracts the maximum value along the last dimension; that is, the feature map is reduced from three dimensions (3D) to two dimensions (2D).
Next, a multi-head attention module is used to analyze the spatial relationships within the obtained features F′. As shown in Figure 6, F′ is split into eight heads that are multiplied by the weight matrices W_{Qi}, W_{Ki}, and W_{Vi}, i = 1, 2, …, 8, to obtain the matrices Q_i, K_i, and V_i, which represent the query, key, and value of each head, respectively [32].
The similarity (Si) of Qi and Ki is calculated by the following:
S_i = (Q_i × K_i^T) / √(d_{K_i}),    (7)
where K_i^T is the transpose of K_i; d_{K_i} is the dimension of K_i; and i = 1, 2, …, 8 is the index of a head.
Further, the attention of a head (Zi) is calculated by weighted matching as follows:
Z_i = softmax(S_i) V_i,    (8)
where softmax(·) is a function that normalizes data into values between zero and one, which serve as the weights of V_i in the weighted matching.
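As a concrete illustration of Equations (7) and (8), the sketch below computes the attention output Z_i of a single head. The per-head dimension of 32 (i.e., 256/8) and the random weight matrices are assumptions for demonstration only.

```python
import torch
import torch.nn.functional as F

def head_attention(x, w_q, w_k, w_v):
    # Q_i, K_i, V_i: project the token sequence with this head's weights.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Equation (7): similarity S_i = (Q_i x K_i^T) / sqrt(d_Ki).
    s = (q @ k.transpose(-2, -1)) / k.shape[-1] ** 0.5
    # Equation (8): Z_i = softmax(S_i) V_i.
    return F.softmax(s, dim=-1) @ v

x = torch.randn(7, 256)                    # token sequence F' from Equation (6)
w_q, w_k, w_v = (torch.randn(256, 32) for _ in range(3))
z_i = head_attention(x, w_q, w_k, w_v)     # one head's attention: (7, 32)
```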
Afterward, the attention Z_i from each head is concatenated to generate Z = (Z1, Z2, …, Z8). After the multi-head attention module, two shortcut connections (the blue arrows in Figure 5) and two normalization layers (the green frames in Figure 5) are used to overcome degradation and accelerate the convergence of the network [33]. To increase the nonlinear expressiveness of the network, a fully connected feed-forward network, consisting of two linear transformation layers with an activation layer between them, is inserted after the first normalization layer. Six iterations are conducted over this block of layers (the blue block in Figure 5). Finally, a linear layer changes the output size to two; namely, the output comprises the contact force values on the human back and thigh. The whole second-level subnetwork is sketched below.
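Putting these pieces together, a compact PyTorch reconstruction of the second-level subnetwork might look as follows. The feed-forward width of 512 and the mean-pooling of tokens before the final linear layer are assumptions; the paper specifies only the eight heads, the six iterations, and the two-value output.

```python
import torch
import torch.nn as nn

class SecondLevelNet(nn.Module):
    # Second-level subnetwork: flatten F to a 7x256 token sequence
    # (Equation (6)), apply N = 6 transformer encoder blocks with
    # 8 attention heads, then project to the two contact forces.
    def __init__(self, d_model=256, n_heads=8, n_layers=6):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=512,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 2)  # forces on back and thigh

    def forward(self, f):                          # f: (batch, 256, 7, 7)
        tokens = f.max(-1).values.transpose(1, 2)  # Equation (6): (batch, 7, 256)
        z = self.encoder(tokens)
        return self.head(z.mean(dim=1))            # (batch, 2)

force = SecondLevelNet()(torch.randn(1, 256, 7, 7))  # torch.Size([1, 2])
```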
It is worth mentioning that the loss function of the DNN is defined as the norm of the distance between the predicted and actual forces, expressed as follows:
f = (1/2)(‖F_B − F_B^g‖² + ‖F_T − F_T^g‖²),    (9)
where F_B and F_T are the predicted contact forces on the human back and thigh, respectively, and F_B^g and F_T^g are the corresponding actual contact forces.
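In PyTorch, Equation (9) reduces to a half sum of squared errors over the two output channels. The sketch below assumes predictions and targets stacked as (batch, 2) tensors with the back force in column 0 and the thigh force in column 1.

```python
import torch

def contact_force_loss(pred, target):
    # Equation (9): f = 0.5 * (||F_B - F_B^g||^2 + ||F_T - F_T^g||^2),
    # summed over a batch of (back, thigh) force pairs.
    return 0.5 * (pred - target).pow(2).sum()

loss = contact_force_loss(torch.tensor([[80.0, 120.0]]),
                          torch.tensor([[75.0, 130.0]]))  # tensor(62.5)
```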
2.4. Motion Adjustment Based on a Greedy Algorithm

A greedy algorithm makes a locally optimal choice at each step so that the objective function is optimized [34]. In this study, a greedy-algorithm-based motion adjustment method is proposed to improve patient comfort. Because the relative positions of the holding points cannot be changed during the lifting-up operation, the lifting state is adjusted by changing the angle between the thigh and the horizontal line (θ1) and the angle between the upper body and the horizontal line (θ2). The lifting state adjustment steps are presented in Figure 7.
As shown in Figure 7, the method first applies an action set A to the current lifting state to generate the next virtual lifting states through a virtual human-machine system. The action set A = (a1, a2, …, a9) contains nine actions, as presented in Table 1: each action changes θ1 and θ2 by 0, +5, or −5 degrees. The next-state set S_ = (s1_, s2_, …, s9_) contains the nine candidate virtual lifting states obtained from the corresponding nine actions. The specific steps of the virtual lifting state generation are as follows:
(1) Delineate the human skeleton by connecting the head and the midpoints of the shoulders (L&R), hips (L&R), knees (L&R), and ankles (L&R), as shown in Figure 8.
(2) Apply an action ai = (σ1, σ2) to the current lifting state by rotating line 1-2-3 and lifting point 6 by σ1 degrees around point 3, and rotating line 3-4-5 and lifting point 7 by σ2 degrees around point 3.
(3) Record the new lifting state after action ai as the next virtual lifting state, denoted by si_.
The next virtual lifting state set S_ is used as the input of the proposed DNN to obtain the contact force set F = (f1, f2, …, f9). The minimum contact force f_m is then obtained by the following:
f_m = min(F),    (10)
where min(·) is a function that finds the minimum value of a list.
Next, f_m is compared with the contact force f obtained from the tactile sensor on the robot's arm for the current lifting state. If f_m ≥ f, the adjustment process terminates; otherwise, the action a_m corresponding to f_m is performed on the real robot system to update the current lifting state and contact force, as indicated by the dashed red arrow in Figure 7. The system then iterates from the beginning until the adjustment terminates. A sketch of this loop is given below.
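The following Python sketch summarizes the loop of Figure 7. The four helper callables (the virtual human-machine model, the trained DNN, the tactile-sensor reading, and the robot command) are hypothetical placeholders for the corresponding hardware and model interfaces, and the iteration cap is an added safety assumption.

```python
import numpy as np

# Table 1: candidate changes (sigma1, sigma2), in degrees, applied to the
# thigh angle theta1 and the upper-body angle theta2.
ACTIONS = [(0, 0), (0, 5), (0, -5), (5, 0), (5, 5),
           (5, -5), (-5, 0), (-5, 5), (-5, -5)]

def greedy_adjust(state, simulate, dnn_predict, read_tactile_force,
                  execute_action, max_iters=20):
    for _ in range(max_iters):
        f = read_tactile_force()                   # real contact force f
        # Predicted contact force of each virtual next state s_i_.
        forces = [dnn_predict(simulate(state, a)) for a in ACTIONS]
        m = int(np.argmin(forces))                 # f_m = min(F)
        if forces[m] >= f:                         # no predicted improvement,
            break                                  # so terminate the adjustment
        state = execute_action(state, ACTIONS[m])  # perform a_m on the robot
    return state
```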

3. Experimental Results

To assess validity, we first collected a new dataset for training the developed DNN. We then employed the DNN to estimate contact forces and compared its accuracy and speed with those of two methods commonly used in contact-force estimation. Finally, we applied the method to a dual-arm nursing-care robot for motion adjustment and verified its effectiveness.

3.1. Validation of the Developed DNN in a Nursing Environment

(1) Dataset collection: The data collection experiment was conducted on a dual-arm robot platform. As shown in Figure 9, the robot comprises a head, a chassis, a body, and two robotic arms. The arms are segmented into upper arms and forearms linked by elbow joints, and each complete arm is attached to the body via a shoulder joint. The joints linking the body and chassis include the lumbar and hip joints. The robot stands at a height of 1350 mm with a body thickness of around 1000 mm; the distance between the shoulder joints is approximately 688 mm, the maximum arm diameter is 100 mm, and the total mass is 150 kg.
The experiment recruited 50 subjects (20 females and 30 males). The subjects' ages range from 22 to 65 years old, with an average of 34 years old; their heights range from 1.50 m to 1.85 m, and their weights from 47 kg to 72 kg. As shown in Figure 9, a weight scale was used to weigh the subjects. A total of 10 blocks, evenly distributed on the back and legs of each subject, were selected as candidate lifting points and denoted by (B1, B2, …, B5) and (T1, T2, …, T5), and nine markers were attached to the human joints to mark their positions (see Figure 10). A calibrated motion capture device with a frame rate of 10 Hz recorded the motion of the joint markers, and two tactile sensors covering the robot's arms recorded the contact forces between the human and the robot.
In the data collection process, each subject was lifted and then adjusted by the robot from the initial state to the final state along a preset trajectory, as shown in Figure 11. The dataset consists of 500,000 samples, each of which includes the positions of the human joints and lifting points, the human weight, and the contact forces. Data augmentation techniques [35] were applied, and the data were split into an evaluation set consisting of data from five subjects (two females and three males) and a training set consisting of data from the remaining 45 subjects (18 females and 27 males). The new dataset was named the contact force for patient transfer robot (CFPR) dataset. Typical examples of the data samples are depicted in Figure 12.
(2) Contact force estimation: The experiment was conducted in the PyTorch environment. The training and testing processes were run on a PC with an Intel Core i7-6700HQ CPU, 8 GB RAM, and a 4 GB NVIDIA GeForce GTX 950M GPU. The proposed method was compared with two methods commonly used in lifting-force estimation on the CFPR dataset.
(3) Evaluation metric: To evaluate the proposed model's performance, the average accuracy (AA) was used as the evaluation metric, calculated as follows. First, the 10 N rule was used to judge the correctness of an estimation: an estimation was regarded as correct when the error between the predicted value and the true value was less than 10 N. Then, the ratio of the number of correct estimations to the total number of estimations (CE-to-TE ratio) was computed for each force. Finally, the average CE-to-TE ratio was taken as the average accuracy. A sketch of this metric is given below.
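For reference, a minimal implementation of this metric could look as follows, assuming predictions and ground truth are stored as (N, 2) arrays of back and thigh forces in newtons.

```python
import numpy as np

def average_accuracy(pred, true, tol=10.0):
    # 10 N rule: an estimate is correct when |pred - true| < 10 N.
    # AA is the mean CE-to-TE ratio over the back and thigh channels.
    pred, true = np.asarray(pred), np.asarray(true)
    correct = np.abs(pred - true) < tol     # boolean (N, 2) matrix
    return correct.mean(axis=0).mean()      # per-channel ratio, then average

aa = average_accuracy([[80.0, 120.0], [60.0, 95.0]],
                      [[75.0, 133.0], [52.0, 99.0]])  # 0.75
```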
The average accuracy and speed of the total contact force estimation are presented in Table 2. The method in [12] had the fastest speed, but its accuracy is the lowest of all methods because it uses a mechanical lifting-up model in which the human is regarded as a two-link object [13], ignoring the posture of the lower body. The accuracy of the method proposed in [16] was 42% lower than that of the method proposed in this study because its mechanical model cannot define the contact parameters between the human and the robot. In contrast, the proposed method achieves the highest accuracy, and its speed satisfies the requirement of patient-transfer robots (<500 ms).

3.2. Practical Application to a Dual-Arm Patient Transfer Robot

After verifying the effectiveness of the proposed method in estimating the lifting forces, we compared the average contact force and the comfort level perceived by the subjects, which served as indicators of their comfort throughout the lifting process [11].
The experiments were conducted on 10 healthy subjects (seven males and three females); the subjects' statistics are given in Table 3. As shown in Table 3, their ages range from 21 to 56 years old, with an average of 35 years old, and their weights range from 52 kg to 81 kg. The contact force and comfort of each subject were measured before and after adjusting the lifting state. In the experimental process, each subject was lifted and adjusted through the proposed motion adjustment method. The subject's comfort was obtained by a questionnaire survey, and the contact force data were collected by the tactile sensors installed on the robot's arms. Comfort (both before and after the adjustment) was rated on a one-to-six scale, where "1" means very uncomfortable, "2" uncomfortable, "3" slightly uncomfortable, "4" slightly comfortable, "5" comfortable, and "6" very comfortable. To ensure the safety of the transfer process, nursing staff monitored the entire process at the robot's side.
The contact forces and comfort levels before and after the adjustment are presented in Table 4 and Table 5, respectively. The data show that the contact force of 9 out of 10 subjects was reduced and the comfort level of 8 out of 10 subjects improved after the motion adjustment. These experimental results demonstrate that the proposed method can effectively reduce the contact force and improve comfort. Typical adjustment examples are presented in Figure 13.

4. Conclusions

This paper proposed a motion adjustment method to improve the comfort of patients transferred by a dual-arm robot. First, a two-level DNN was developed to estimate the contact force between the human and the robot. Then, the robot's motion was adjusted by a greedy algorithm. The experimental results demonstrate that the proposed motion adjustment method can effectively reduce the contact force between a patient transfer robot and a patient while improving patient comfort.
The results presented in this study validate the effectiveness of the proposed method and provide a new way to estimate the contact force between a care receiver and a robot. In addition, the proposed method allows a patient-transfer robot to insert its two arms under the back and lower thighs of a care receiver autonomously.
In the future, other factors affecting transfer comfort can be explored further; meanwhile, a more comfortable holding point could be selected when lifting a care receiver. In addition, the proposed DNN-based method can be used to recognize an optimal comfortable posture when designing human apparatuses or furniture.

Author Contributions

Validation, Z.Y.; formal analysis, Q.L.; investigation, K.W.; writing—original draft preparation, M.C.; writing—review and editing, M.C.; funding acquisition, S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the “S&T Program of Hebei, grant number 22372001D” and “The National Key Research and Development Plan of China, grant number 2021YFC0122704”.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the ethics committee of Fudan University (approval code: FE23118R; approval date: 4 June 2023).

Informed Consent Statement

All participants provided written informed consent.

Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mukai, T.; Hirano, S.; Nakashima, H.; Sakaida, Y.; Guo, S. Realization and safety measures of patient transfer by nursing-care assistant robot RIBA with tactile sensors. J. Robot. Mechatron. 2011, 23, 360–369. [Google Scholar] [CrossRef]
  2. Ding, M.; Ikeura, R.; Mori, Y.; Mukai, T.; Hosoe, S. Measurement of human body stiffness for lifting-up motion generation using nursing-care assistant robot—RIBA. In Proceedings of the 2013 IEEE SENSORS, Baltimore, MD, USA, 3–6 November 2013; pp. 1–4. [Google Scholar] [CrossRef]
  3. Mukai, T.; Hirano, S.; Nakashima, H.; Yoshida, M.; Guo, S.; Hayakawa, Y. Manipulation using tactile information for a nursing-care assistant robot in whole-body contact with the object. Trans. Jpn. Soc. Mech. Eng. Ser. C 2011, 77, 3794–3807. [Google Scholar] [CrossRef]
  4. Mukai, T.; Onishi, M.; Odashima, T.; Hirano, S.; Luo, Z. Development of the tactile sensor system of a human-interactive robot “RI-MAN”. IEEE Trans. Robot. 2008, 24, 505–512. [Google Scholar] [CrossRef]
  5. Liu, Y.; Guo, S.; Yin, Y.; Jiang, Z.; Liu, T. Design and compliant control of a piggyback transfer robot. J. Mech. Robot. 2022, 14, 031009. [Google Scholar] [CrossRef]
  6. Liu, Y.; Chen, G.; Liu, J.; Guo, S.; Mukai, T. Biomimetic design of a chest carrying nursing-care robot for transfer task. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018. [Google Scholar]
  7. Li, Y.; Guo, S.; Zhu, L.; Mukai, T.; Gan, Z. Enhanced probabilistic inference algorithm using probabilistic neural networks for learning control. IEEE Access 2019, 7, 184457–184467. [Google Scholar] [CrossRef]
  8. Li, Y.; Guo, S.; Zhu, L.; Mukai, T.; Gan, Z. A recurrent reinforcement learning approach applicable to highly uncertain environments. Int. J. Adv. Robot. Syst. 2020, 17, 1729881420916258. [Google Scholar] [CrossRef]
  9. Mukai, T.; Hirano, S.; Nakashima, H.; Kato, Y.; Sakaida, Y.; Guo, S.; Hosoe, S. Development of a nursing-care assistant robot RIBA that can lift a human in its arms. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010. [Google Scholar]
  10. Mukai, T.; Hirano, S.; Yoshida, M.; Nakashima, H.; Guo, S.; Hayakawa, Y. Whole-body contact manipulation using tactile information for the nursing-care assistant robot RIBA. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011. [Google Scholar]
  11. Mukai, T.; Hirano, S.; Yoshida, M.; Nakashima, H.; Guo, S.; Hayakawa, Y. Tactile-based motion adjustment for the nursing-care assistant robot RIBA. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011. [Google Scholar]
  12. Zyada, Z.; Hayakawa, Y.; Hosose, S. Kinematic analysis of a two-link object for whole arm manipulation. In Proceedings of the 9th WSEAS International Conference on Signal Processing, Robotics and Automation, Stevens Point, WI, USA, 20–22 February 2010. [Google Scholar]
  13. Ding, M.; Ikeura, R.; Mukai, T.; Nagashima, H.; Hirano, S.; Matsuo, K.; Hosoe, S. Comfort estimation during lift-up using nursing-care robot—RIBA. In Proceedings of the 2012 First International Conference on Innovative Engineering Systems, Alexandria, Egypt, 7–9 December 2012. [Google Scholar]
  14. Ji, Z.; Wang, H.; Jiang, G.; Li, L. Analysis of muscle activity utilizing bench presses in the AnyBody simulation modelling system. Model. Simul. Eng. 2016, 1, 3649478. [Google Scholar] [CrossRef]
  15. Ueda, J.; Ming, D.; Krishnamoorthy, V.; Shinohara, M.; Ogasawara, T. Individual muscle control using an exoskeleton robot for muscle function testing. IEEE Trans. Neural Syst. Rehabil. Eng. 2010, 18, 339–350. [Google Scholar] [CrossRef]
  16. Ding, M.; Matsubara, T.; Funaki, Y.; Ikeura, R.; Mukai, T.; Ogasawara, T. Generation of comfortable lifting motion for a human transfer assistant robot. Int. J. Intell. Robot. Appl. 2017, 1, 74–85. [Google Scholar] [CrossRef]
  17. Frazier, R.M.; Carter-Templeton, H.; Wyatt, T.H.; Wu, L. Current trends in robotics in nursing patents—A glimpse into emerging innovations. CIN Comput. Inform. Nurs. 2019, 37, 290–297. [Google Scholar] [CrossRef]
  18. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  19. Kim, H.; Phan, T.Q.; Hong, W.; Chun, S.Y. Physiology-based augmented deep neural network frameworks for ECG biometrics with short ECG pulses considering varying heart rates. Pattern Recognit. Lett. 2022, 156, 1–6. [Google Scholar] [CrossRef]
  20. Yan, P.; Duan, S.; Luo, X.; Zhang, N.; Deng, Y. Development and validation of a deep neural network–based model to predict acute kidney injury following intravenous administration of iodinated contrast media in hospitalized patients with chronic kidney disease: A multicohort analysis. Nephrol. Dial. Transplant. 2022, 38, 352–361. [Google Scholar] [CrossRef] [PubMed]
  21. Olamat, A.; Ozel, P.; Atasever, S. Deep learning methods for multi-channel EEG-based emotion recognition. Int. J. Neural Syst. 2022, 32, 2250021. [Google Scholar] [CrossRef] [PubMed]
  22. Yun, M.; Kim, J.; Do, K. Estimation of wave-breaking index by learning nonlinear relation using multilayer neural network. J. Mar. Sci. Eng. 2022, 10, 50–66. [Google Scholar] [CrossRef]
  23. Gao, Q.; Liu, J.; Ju, Z.; Zhang, X. Dual-hand detection for human–robot interaction by a parallel network based on hand detection and body pose estimation. IEEE Trans. Ind. Electron. 2019, 66, 9663–9672. [Google Scholar] [CrossRef]
  24. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
  25. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  26. Cao, Z.; Hidalgo, G.; Simon, T.; Wei, S.E.; Sheikh, Y. OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 172–186. [Google Scholar] [CrossRef] [PubMed]
  27. Chen, M.; Wu, J.; Li, S.; Liu, J.; Yokota, H.; Guo, S. Accurate and real-time human-joint-position estimation for a patient-transfer robot using a two-level convolutional neutral network. Robot. Auton. Syst. 2021, 139, 103735. [Google Scholar] [CrossRef]
  28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  29. Jiang, S.; Min, W.; Mei, S. Hierarchy-dependent cross-platform multi-view feature learning for venue category prediction. IEEE Trans. Multimed. 2018, 21, 1609–1619. [Google Scholar] [CrossRef]
  30. Min, W.; Mei, S.; Liu, L.; Wang, Y.; Jiang, S. Multi-task deep relative attribute learning for visual urban perception. IEEE Trans. Image Process. 2019, 29, 657–669. [Google Scholar] [CrossRef] [PubMed]
  31. Abdeljawad, A.; Grohs, P. Approximations with deep neural networks in sobolev time-space. Anal. Appl. 2022, 20, 499–541. [Google Scholar] [CrossRef]
  32. Lin, Z.; Akin, H.; Rao, R.; Hie, B.; Zhu, Z.; Lu, W.; Rives, A. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science 2023, 379, 1123–1130. [Google Scholar] [CrossRef] [PubMed]
  33. Vokoun, C.R.; Huang, X.; Jackson, M.B.; Basso, M.A. Response normalization in the superficial layers of the superior colliculus as a possible mechanism for saccadic averaging. J. Neurosci. 2014, 34, 7976–7987. [Google Scholar] [CrossRef] [PubMed]
  34. Tang, J.; Tang, X.; Lim, A.; Han, K.; Li, C.; Yuan, J. Revisiting modified greedy algorithm for monotone submodular maximization with a knapsack constraint. Proc. ACM Meas. Anal. Comput. Syst. 2021, 5, 1–22. [Google Scholar]
  35. Moon, G.; Chang, J.Y.; Suh, Y.; Lee, K.M. Holistic planimetric prediction to local volumetric prediction for 3d human pose estimation. arXiv 2017, arXiv:1706.0475. [Google Scholar]
Figure 1. Operational steps of a patient transfer robot.
Figure 2. Structure of the developed network.
Figure 3. Function of the encoder.
Figure 4. Structure of the first-level subnetwork. S is heatmap 1, L is the vector field, S′ is heatmap 2, C:X-Y represents a convolutional layer with X convolutional kernels of size Y × Y, and P:N is a max-pooling layer, where N is the stride of the filter.
Figure 5. Structure of the second-level subnetwork. F, F′, Z, and Z′ are feature maps; N is the number of iterations.
Figure 6. Structure of the multi-head attention module.
Figure 7. Flowchart of the lifting state adjustment. A is the virtual action set; a is an action in A; s is the current lifting state, and s_ is the next lifting state, generated from s and a through the real human-machine system; S_ is the next virtual lifting state set. The virtual human-machine system generates the next virtual lifting state set from s and A, while the real human-machine system generates the real next lifting state from s and a. DNN is the proposed neural network, and Min() chooses the minimum value from an array. F is the virtual contact force set, and f is the real contact force. The dashed arrow indicates updating.
Figure 8. Sequences of generating the virtual lifting state.
Figure 9. Platform for data collection.
Figure 10. Markers for data collection.
Figure 11. Lifting state adjustment for data collection.
Figure 12. Typical examples of the CFPR dataset. The first column shows the positions of the human joints and the lifting points, the second column indicates the weight of the subjects, and the last column depicts the contact forces on the thigh and back of the subjects.
Figure 13. Examples of lifting state adjustment. (a) The lifting state before adjustment; (b) the lifting state after adjustment.
Table 1. Action set.

Action:     a1      a2      a3       a4      a5      a6       a7       a8       a9
(σ1, σ2):   (0,0)   (0,5)   (0,−5)   (5,0)   (5,5)   (5,−5)   (−5,0)   (−5,5)   (−5,−5)
Table 2. Average accuracy (AA) and speed of the lifting-force estimation.

Method            Back (%)   Thigh (%)   Average (%)   Speed (ms)
Method in [12]    5          20.1        12.6          1
Method in [16]    21         62.5        41.8          2
Our method        80.7       87.3        84.0          30
Table 3. Information on the subjects participating in the tests.

Subject   Gender   Age (Y/O)   Weight (kg)
Sub1      Male     29          81
Sub2      Female   41          66
Sub3      Male     29          78
Sub4      Female   35          52
Sub5      Male     44          65
Sub6      Male     56          70
Sub7      Male     30          76
Sub8      Female   30          65
Sub9      Male     21          76
Sub10     Male     46          60
Table 4. Contact force change of the subjects (N).

          Thigh                                Back
Subject   Before    After     Change           Before    After     Change
Sub1      512.75    436.15    −2.96            161.47    79.61     −79.20
Sub2      447.07    444.11    −65.23           209.10    129.90    −72.32
Sub3      477.16    411.93    −75.53           189.67    117.35    −36.26
Sub4      322.27    246.74    −16.22           129.85    93.59     −34.94
Sub5      375.87    359.65    −6.22            111.03    76.09     −39.63
Sub6      423.66    417.44    −102.05          183.40    143.77    −36.92
Sub7      387.91    285.86    −1.89            158.71    121.79    +41.00
Sub8      359.55    357.66    −56.00           45.75     86.75     −78.23
Sub9      423.25    367.25    +43.81           130.71    52.48     +60.38
Sub10     278.46    322.27    −76.60           69.46     129.85    −81.86
Table 5. Change in the comfort level obtained from the questionnaire.

Subject   Before   After   Change
Sub1      4        5       +1
Sub2      2        5       +3
Sub3      5        4       +1
Sub4      2        3       +1
Sub5      2        3       +1
Sub6      3        4       +1
Sub7      1        4       +3
Sub8      1        4       +3
Sub9      4        3       −1