Sensing, Estimating, and Analyzing Human Movements for Human–Robot Interaction

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (30 April 2024) | Viewed by 21357

Special Issue Editors


Prof. Dr. Feng Jiang
Guest Editor
School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
Interests: artificial intelligence; computer vision; video coding; machine learning

Prof. Dr. Jie Liu
Guest Editor
AI Research Institute, Harbin Institute of Technology, Shenzhen 518055, China
Interests: artificial intelligence; computer science and engineering

Dr. Chunzhi Yi
Guest Editor
School of Mechatronics Engineering, Harbin Institute of Technology, Harbin 150001, China
Interests: biomedical signal processing; wearable robots

Special Issue Information

Dear Colleagues,

Recent advances in human–robot interaction (HRI) are playing an increasingly pivotal role across a wide spectrum of robots, from household to industrial and from virtual interaction to close physical collaboration. Because movement sensing is a core function of HRI systems, considerable effort and attention have been devoted to sensing, estimating, and analyzing continuous, high-dimensional human movements in order to semantically decode motor intent and even the latent beliefs underlying human motor control. The purpose of this Special Issue is therefore to describe the state of the art in human neuromuscular and cognitive behaviors as reflected by human movements, and to present the challenges associated with leveraging such knowledge in the human-centered design and control of HRI systems.

This Special Issue aims to present the latest results and emerging algorithmic techniques for sensing, estimating, and analyzing human movements in human–robot interaction. This fits the scope of Sensors, as algorithms are used to process the information collected by sensors and sensor networks.

Prof. Dr. Feng Jiang
Prof. Dr. Jie Liu
Dr. Chunzhi Yi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Human–robot interaction
  • Human movement analysis
  • Human augmentation
  • Inner belief estimation
  • Neuromuscular control
  • Human intent perception
  • Bio-inspired design and control of robots

Published Papers (13 papers)


Research

18 pages, 9319 KiB  
Article
Mapping Method of Human Arm Motion Based on Surface Electromyography Signals
by Yuanyuan Zheng, Gang Zheng, Hanqi Zhang, Bochen Zhao and Peng Sun
Sensors 2024, 24(9), 2827; https://doi.org/10.3390/s24092827 - 29 Apr 2024
Viewed by 251
Abstract
This paper investigates a method for precise mapping of human arm movements using sEMG signals. A multi-channel approach captures the sEMG signals, which, combined with joint angles accurately calculated from an Inertial Measurement Unit, allows for action recognition and mapping through deep learning algorithms. Firstly, signal acquisition and processing were carried out, which involved selecting sensor placements and acquiring data from various movements (hand gestures, single-degree-of-freedom joint movements, and continuous joint actions). Then, interference was removed by filtering, and the signals were preprocessed using normalization and moving averages to obtain sEMG signals with salient features. Additionally, this paper constructs a hybrid network model, combining Convolutional Neural Networks and Artificial Neural Networks, and employs a multi-feature fusion algorithm to enhance the accuracy of gesture recognition. Furthermore, a nonlinear fitting between sEMG signals and joint angles was established based on a backpropagation neural network, incorporating a momentum term and adaptive learning rate adjustments. Finally, based on the gesture recognition and joint angle prediction models, prosthetic arm control experiments were conducted, achieving highly accurate arm movement prediction and execution. This paper not only validates the potential application of sEMG signals in the precise control of robotic arms but also lays a solid foundation for the development of more intuitive and responsive prostheses and assistive devices. Full article
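
The preprocessing chain summarized above (filtering, normalization, moving averages) follows a standard sEMG envelope-extraction pattern. A minimal Python sketch of such a chain is given below; the sampling rate, band-pass cutoffs, filter order, and window length are illustrative assumptions rather than the paper's values.

```python
# Minimal sketch of an sEMG preprocessing chain: band-pass filtering,
# rectification, moving-average smoothing, and per-channel normalization.
# Cutoff frequencies, filter order, and window length are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_semg(raw, fs=1000.0, band=(20.0, 450.0), win_ms=100):
    """raw: (n_samples, n_channels) sEMG; returns a normalized envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, raw, axis=0)           # suppress interference outside the sEMG band
    rectified = np.abs(filtered)                      # full-wave rectification
    win = int(fs * win_ms / 1000)
    kernel = np.ones(win) / win
    envelope = np.apply_along_axis(
        lambda ch: np.convolve(ch, kernel, mode="same"), 0, rectified
    )                                                 # moving-average envelope
    mins, maxs = envelope.min(axis=0), envelope.max(axis=0)
    return (envelope - mins) / (maxs - mins + 1e-8)   # per-channel min-max normalization

# Example: 8-channel recording, 2 s at 1 kHz
emg = np.random.randn(2000, 8)
features = preprocess_semg(emg)
print(features.shape)  # (2000, 8)
```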

24 pages, 4796 KiB  
Article
sEMG-Based Robust Recognition of Grasping Postures with a Machine Learning Approach for Low-Cost Hand Control
by Marta C. Mora, José V. García-Ortiz and Joaquín Cerdá-Boluda
Sensors 2024, 24(7), 2063; https://doi.org/10.3390/s24072063 - 23 Mar 2024
Viewed by 575
Abstract
The design and control of artificial hands remain a challenge in engineering. Popular prostheses are bio-mechanically simple with restricted manipulation capabilities, as advanced devices are expensive or abandoned because communicating with the hand is difficult. For social robots, the interpretation of human intention is key to their integration in daily life. This can be achieved with machine learning (ML) algorithms, which are barely used for grasping posture recognition. This work proposes an ML approach to recognize, in real time, nine hand postures representing 90% of the activities of daily living, using an sEMG human–robot interface (HRI). Data from 20 subjects wearing a Myo armband (8 sEMG signals) were gathered from the NinaPro DS5 dataset and from experimental tests with the YCB Object Set, and they were used jointly in the development of a simple multi-layer perceptron in MATLAB, with a global success rate of 73% using only two features. GPU-based implementations were run to select the best architecture, with generalization capabilities, robustness versus electrode shift, low memory expense, and real-time performance. This architecture enables the implementation of grasping posture recognition in low-cost devices, aimed at the development of affordable functional prostheses and HRI for social robots. Full article
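
As a rough illustration of this kind of recognition pipeline (a few time-domain features per sEMG channel feeding a small multilayer perceptron), here is a minimal Python sketch. The two features shown (mean absolute value and waveform length), the layer size, and the synthetic data are assumptions; the paper trains its perceptron in MATLAB on NinaPro DS5 and YCB recordings.

```python
# Minimal sketch: an MLP classifying grasp postures from two time-domain features
# per sEMG channel (mean absolute value and waveform length). Feature choice,
# layer size, and the synthetic data are assumptions made for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def features_from_window(window):
    """window: (n_samples, 8) sEMG window -> 16-dim feature vector."""
    mav = np.mean(np.abs(window), axis=0)                  # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)   # waveform length
    return np.concatenate([mav, wl])

rng = np.random.default_rng(0)
n_postures, n_windows, win_len = 9, 200, 150
X = np.vstack([
    features_from_window(rng.normal(loc=k, scale=1.0, size=(win_len, 8)))
    for k in range(n_postures) for _ in range(n_windows)
])
y = np.repeat(np.arange(n_postures), n_windows)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```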

19 pages, 3767 KiB  
Article
A Generative Model to Embed Human Expressivity into Robot Motions
by Pablo Osorio, Ryusuke Sagawa, Naoko Abe and Gentiane Venture
Sensors 2024, 24(2), 569; https://doi.org/10.3390/s24020569 - 16 Jan 2024
Cited by 1 | Viewed by 1006
Abstract
This paper presents a model for generating expressive robot motions based on human expressive movements. The proposed data-driven approach combines variational autoencoders and a generative adversarial network framework to extract the essential features of human expressive motion and generate expressive robot motion accordingly. The primary objective was to transfer the underlying expressive features from human to robot motion. The input to the model consists of the robot task defined by the robot’s linear velocities and angular velocities and the expressive data defined by the movement of a human body part, represented by the acceleration and angular velocity. The experimental results show that the model can effectively recognize and transfer expressive cues to the robot, producing new movements that incorporate the expressive qualities derived from the human input. Furthermore, the generated motions exhibited variability with different human inputs, highlighting the ability of the model to produce diverse outputs. Full article
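
For readers unfamiliar with the building blocks, the sketch below shows a bare-bones variational autoencoder over short motion windows, the kind of component that can learn a compact code from acceleration and angular-velocity signals. All dimensions and the KL weight are illustrative assumptions, and the paper additionally couples such an encoder with an adversarial (GAN) objective, which is omitted here.

```python
# Minimal sketch of a variational autoencoder over flattened motion windows
# (e.g., 6 channels of acceleration + angular velocity, 50 timesteps).
# Architecture, latent size, and KL weight are illustrative assumptions.
import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    def __init__(self, in_dim=6 * 50, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = nn.functional.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + 1e-3 * kld     # KL weight is an arbitrary illustrative choice

x = torch.randn(32, 6 * 50)                 # a batch of flattened motion windows
model = MotionVAE()
recon, mu, logvar = model(x)
print(vae_loss(x, recon, mu, logvar).item())
```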

20 pages, 1562 KiB  
Article
Sensing the Intentions to Speak in VR Group Discussions
by Jiadong Chen, Chenghao Gu, Jiayi Zhang, Zhankun Liu and Shin'ichi Konomi
Sensors 2024, 24(2), 362; https://doi.org/10.3390/s24020362 - 07 Jan 2024
Cited by 1 | Viewed by 965
Abstract
While virtual reality (VR) technologies enable remote communication through the use of 3D avatars, it is often difficult to foster engaging group discussions without addressing the limitations to the non-verbal communication among distributed participants. In this paper, we discuss a technique to detect the intentions to speak in group discussions by tapping into intricate sensor data streams from VR headsets and hand-controllers. To this end, we developed a prototype VR group discussion app equipped with comprehensive sensor data-logging functions and conducted an experiment of VR group discussions (N = 24). We used the quantitative and qualitative experimental data to analyze participants’ experiences of group discussions in relation to the temporal patterns of their different speaking intentions. We then propose a sensor-based mechanism for detecting speaking intentions by employing a sampling strategy that considers the temporal patterns of speaking intentions, and we verify the feasibility of our approach in group discussion settings. Full article
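
A common way to frame this kind of detection problem is to slice the continuous headset and controller streams into fixed-length windows and classify each window. The sketch below illustrates that framing with assumed window lengths, features, synthetic data, and a plain logistic-regression classifier; it is not the paper's model or sampling strategy.

```python
# Minimal sketch of a sliding-window formulation for detecting speaking intentions
# from VR headset/controller motion streams. Window length, features, classifier,
# and the synthetic data (with injected "intention" segments) are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_windows(stream, labels, fs=60, win_s=2.0, hop_s=0.5):
    """stream: (n_samples, n_dims) pose/velocity signals; labels: per-sample 0/1."""
    win, hop = int(fs * win_s), int(fs * hop_s)
    X, y = [], []
    for start in range(0, len(stream) - win, hop):
        seg = stream[start:start + win]
        feats = np.concatenate([seg.mean(0), seg.std(0),
                                np.abs(np.diff(seg, axis=0)).mean(0)])
        X.append(feats)
        y.append(int(labels[start:start + win].mean() > 0.5))   # majority label
    return np.array(X), np.array(y)

rng = np.random.default_rng(1)
n = 60 * 120                                    # 2 minutes at 60 Hz
stream = rng.normal(size=(n, 21))               # headset + two controllers, 7 dims each
labels = np.zeros(n, dtype=int)
for s in (2000, 4500, 6500):                    # synthetic pre-speech segments
    labels[s:s + 400] = 1
    stream[s:s + 400] += 0.8                    # intention segments look slightly different

X, y = make_windows(stream, labels)
clf = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
print("windows:", len(y), "positive:", int(y.sum()),
      "training accuracy:", round(clf.score(X, y), 3))
```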

24 pages, 5599 KiB  
Article
Comparative Analysis of the Clustering Quality in Self-Organizing Maps for Human Posture Classification
by Lisiane Esther Ekemeyong Awong and Teresa Zielinska
Sensors 2023, 23(18), 7925; https://doi.org/10.3390/s23187925 - 15 Sep 2023
Cited by 1 | Viewed by 1374
Abstract
The objective of this article is to develop a methodology for selecting the appropriate number of clusters to group and identify human postures using neural networks with unsupervised self-organizing maps. Although unsupervised clustering algorithms have proven effective in recognizing human postures, many works are limited to testing which data are correctly or incorrectly recognized. They often neglect the task of selecting the appropriate number of groups (where the number of clusters corresponds to the number of output neurons, i.e., the number of postures) using clustering quality assessments. The use of quality scores to determine the number of clusters frees the expert from making subjective decisions about the number of postures, enabling the use of unsupervised learning. Due to high dimensionality and data variability, expert decisions (referred to as data labeling) can be difficult and time-consuming. In our case, there is no manual labeling step. We introduce a new clustering quality score: the discriminant score (DS). We describe the process of selecting the most suitable number of postures using human activity records captured by RGB-D cameras. Comparative studies on the usefulness of popular clustering quality scores—such as the silhouette coefficient, Dunn index, Calinski–Harabasz index, Davies–Bouldin index, and DS—for posture classification tasks are presented, along with graphical illustrations of the results produced by DS. The findings show that DS offers good quality in posture recognition, effectively following postural transitions and similarities. Full article
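
The general recipe of scoring candidate cluster counts with quality indices can be illustrated with standard scikit-learn metrics, as in the sketch below. Note that k-means stands in for the self-organizing map, the paper's discriminant score (DS) is not reproduced here, and the data and range of k are assumptions.

```python
# Minimal sketch of selecting the number of posture clusters with standard
# clustering-quality scores (silhouette, Calinski-Harabasz, Davies-Bouldin).
# k-means stands in for the self-organizing map purely for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

# Stand-in posture features (e.g., joint coordinates from an RGB-D skeleton).
X, _ = make_blobs(n_samples=600, centers=5, n_features=12, random_state=0)

for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(
        f"k={k}  silhouette={silhouette_score(X, labels):.3f}  "
        f"CH={calinski_harabasz_score(X, labels):.1f}  "
        f"DB={davies_bouldin_score(X, labels):.3f}"
    )
# Choose the k that maximizes silhouette / Calinski-Harabasz and minimizes Davies-Bouldin.
```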

18 pages, 6705 KiB  
Article
A Multi-Target Localization and Vital Sign Detection Method Using Ultra-Wide Band Radar
by Jingwen Zhang, Qingjie Qi, Huifeng Cheng, Lifeng Sun, Siyun Liu, Yue Wang and Xinlei Jia
Sensors 2023, 23(13), 5779; https://doi.org/10.3390/s23135779 - 21 Jun 2023
Cited by 3 | Viewed by 1364
Abstract
Life detection technology using ultra-wideband (UWB) radar is a non-contact, active detection technology, which can be used to search for survivors in disaster rescues. The existing multi-target detection method based on UWB radar echo signals has low accuracy and has difficulty extracting breathing and heartbeat information at the same time. Therefore, this paper proposes a new multi-target localization and vital sign detection method using UWB radar. A target recognition and localization method based on permutation entropy (PE) and K-means++ clustering is proposed to determine the number and position of targets in the environment. An adaptive denoising method for vital sign extraction based on ensemble empirical mode decomposition (EEMD) and wavelet analysis (WA) is proposed to reconstruct the breathing and heartbeat signals of human targets. A heartbeat frequency extraction method based on particle swarm optimization (PSO) and stochastic resonance (SR) is proposed to detect the heartbeat frequency of human targets. Experimental results show that the PE and K-means++ method can successfully recognize and locate multiple human targets in the environment, and its average relative error is 1.83%. The EEMD–WA method can effectively filter the clutter signal, and the average relative error of the reconstructed respiratory signal frequency is 4.27%. The average relative error of the heartbeat frequency detected by the PSO–SR method is 6.23%. The multi-target localization and vital sign detection method proposed in this paper can effectively recognize all human targets in the multi-target scene and provide their accurate location and vital sign information. This provides a theoretical basis for the technical system of emergency rescue and technical support for post-disaster rescue. Full article
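
The first stage (permutation entropy per range bin followed by K-means++ clustering of the structured bins) can be sketched as below. The embedding order and delay, the PE threshold, and the simulated echo are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch: compute permutation entropy (PE) per range bin of a synthetic
# UWB slow-time signal and cluster the low-entropy bins with k-means++ to
# localize targets. Parameters and simulated data are illustrative assumptions.
import numpy as np
from itertools import permutations
from sklearn.cluster import KMeans

def permutation_entropy(x, order=3, delay=5):
    """Normalized permutation entropy of a 1-D signal (~1 for noise, lower for structure)."""
    patterns = list(permutations(range(order)))
    counts = np.zeros(len(patterns))
    for i in range(len(x) - (order - 1) * delay):
        window = x[i:i + order * delay:delay]
        counts[patterns.index(tuple(np.argsort(window).tolist()))] += 1
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)) / np.log(len(patterns)))

rng = np.random.default_rng(0)
n_bins, slow_time = 100, 512
echo = rng.normal(size=(n_bins, slow_time))              # pure noise in most range bins
t = np.arange(slow_time)
for target_bin in (20, 21, 60):                          # respiration-like motion in a few bins
    echo[target_bin] = 2.0 * np.sin(2 * np.pi * t / 40) + 0.1 * rng.normal(size=slow_time)

pe = np.array([permutation_entropy(echo[b]) for b in range(n_bins)])
candidate_bins = np.where(pe < 0.95)[0]                  # low PE => structured (human) motion
km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0).fit(
    candidate_bins.reshape(-1, 1).astype(float))
print("estimated target range bins:", np.sort(km.cluster_centers_.ravel()).round(1))
```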

35 pages, 15515 KiB  
Article
Design and Evaluation of an Alternative Control for a Quad-Rotor Drone Using Hand-Gesture Recognition
by Siavash Khaksar, Luke Checker, Bita Borazjan and Iain Murray
Sensors 2023, 23(12), 5462; https://doi.org/10.3390/s23125462 - 09 Jun 2023
Viewed by 1064
Abstract
Gesture recognition is a mechanism by which a system recognizes an expressive and purposeful action made by a user’s body. Hand-gesture recognition (HGR) is a staple of the gesture-recognition literature and has been keenly researched over the past 40 years. Over this time, HGR solutions have varied in medium, method, and application. Modern developments in machine perception have seen the rise of single-camera, skeletal-model, hand-gesture identification algorithms, such as MediaPipe Hands (MPH). This paper evaluates the applicability of these modern HGR algorithms within the context of alternative control. Specifically, this is achieved through the development of an HGR-based alternative-control system capable of controlling a quad-rotor drone. The technical importance of this paper stems from the results produced during the novel and clinically sound evaluation of MPH, alongside the investigatory framework used to develop the final HGR algorithm. The evaluation of MPH highlighted the Z-axis instability of its modelling system, which reduced the landmark accuracy of its output from 86.7% to 41.5%. The selection of an appropriate classifier complemented the computationally lightweight nature of MPH whilst compensating for its instability, achieving a classification accuracy of 96.25% for eight single-hand static gestures. The success of the developed HGR algorithm ensured that the proposed alternative-control system could facilitate intuitive, computationally inexpensive, and repeatable drone control without requiring specialised equipment. Full article
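
For orientation, the sketch below shows how MPH landmarks are typically obtained with MediaPipe's legacy mp.solutions.hands Python API and turned into a feature vector for a gesture classifier. The confidence threshold is an arbitrary choice, and classify_gesture is a hypothetical classifier, not a function from the paper or the library.

```python
# Minimal sketch of extracting hand landmarks with MediaPipe's legacy
# `mp.solutions.hands` API for a downstream gesture classifier.
# `classify_gesture` is a hypothetical function, shown only to indicate where
# a trained static-gesture classifier would be called.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
cap = cv2.VideoCapture(0)                                   # webcam stream
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            # 21 landmarks with normalized x, y and a relative z; the paper reports
            # that the z estimate is the least stable component of the output.
            features = [coord for p in lm for coord in (p.x, p.y, p.z)]
            # command = classify_gesture(features)           # hypothetical mapping to a drone command
cap.release()
```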

17 pages, 2198 KiB  
Article
Vision-Based Efficient Robotic Manipulation with a Dual-Streaming Compact Convolutional Transformer
by Hao Guo, Meichao Song, Zhen Ding, Chunzhi Yi and Feng Jiang
Sensors 2023, 23(1), 515; https://doi.org/10.3390/s23010515 - 03 Jan 2023
Cited by 1 | Viewed by 2077
Abstract
Learning from visual observation for efficient robotic manipulation remains a significant challenge in Reinforcement Learning (RL). Although pairing RL policies with a convolutional neural network (CNN) visual encoder achieves high efficiency and success rates, the method’s general performance across multiple tasks is still limited by the efficacy of the encoder. Meanwhile, the increasing cost of optimizing the encoder for general performance can erode the efficiency advantage of the original policy. Building on the attention mechanism, we design a robotic manipulation method that significantly improves the policy’s general performance across multiple tasks by combining a lightweight Transformer-based visual encoder, unsupervised learning, and data augmentation. The encoder of our method can achieve the performance of the original Transformer with much less data, ensuring efficiency in the training process and strengthening general multi-task performance. Furthermore, we experimentally demonstrate that the master view outperforms the alternative third-person views in general robotic manipulation tasks when third-person and egocentric views are combined to assimilate global and local visual information. In extensive experiments on tasks from the OpenAI Gym Fetch environment, especially the Push task, our method succeeds in 92% of trials, versus 65% for the baseline, 78% for the CNN encoder, and 81% for the ViT encoder, while requiring fewer training steps. Full article
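
The idea of fusing a third-person ("master") view with an egocentric view before the policy head can be sketched as below. A small CNN stands in for the paper's compact convolutional Transformer purely for illustration, and all sizes are assumptions.

```python
# Minimal sketch of a two-stream visual encoder that fuses a third-person ("master")
# view with an egocentric view before an RL policy head. A small CNN stands in for
# the compact convolutional Transformer; all dimensions are illustrative assumptions.
import torch
import torch.nn as nn

def conv_stream(out_dim=128):
    return nn.Sequential(
        nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim), nn.ReLU(),
    )

class DualViewEncoder(nn.Module):
    def __init__(self, feat_dim=128, n_actions=4):
        super().__init__()
        self.third_person = conv_stream(feat_dim)   # global scene information
        self.egocentric = conv_stream(feat_dim)     # local, gripper-centric information
        self.policy_head = nn.Linear(2 * feat_dim, n_actions)

    def forward(self, third_img, ego_img):
        z = torch.cat([self.third_person(third_img), self.egocentric(ego_img)], dim=-1)
        return self.policy_head(z)

policy = DualViewEncoder()
third = torch.randn(8, 3, 84, 84)    # batch of third-person frames
ego = torch.randn(8, 3, 84, 84)      # batch of egocentric frames
print(policy(third, ego).shape)       # torch.Size([8, 4])
```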

18 pages, 9221 KiB  
Article
Cross-Modal Reconstruction for Tactile Signal in Human–Robot Interaction
by Mingkai Chen and Yu Xie
Sensors 2022, 22(17), 6517; https://doi.org/10.3390/s22176517 - 29 Aug 2022
Cited by 2 | Viewed by 1404
Abstract
A human can infer the magnitude of interaction force solely based on visual information because of prior knowledge in human–robot interaction (HRI). A method of reconstructing tactile information through cross-modal signal processing is proposed in this paper. In our method, visual information is added as an auxiliary source to tactile information. In this case, the receiver is only able to determine the tactile interaction force from the visual information provided. In our method, we first process groups of pictures (GOPs) and treat them as the input. Secondly, we use the low-rank foreground-based attention mechanism (LAM) to detect regions of interest (ROIs). Finally, we propose a linear regression convolutional neural network (LRCNN) to infer contact force in video frames. The experimental results show that our cross-modal reconstruction is indeed feasible. Furthermore, compared to other work, our method is able to reduce the complexity of the network and improve the material identification accuracy. Full article

10 pages, 1254 KiB  
Article
Human Pulse Detection by a Soft Tactile Actuator
by Zixin Huang, Xinpeng Li, Jiarun Wang, Yi Zhang and Jingfu Mei
Sensors 2022, 22(13), 5047; https://doi.org/10.3390/s22135047 - 05 Jul 2022
Cited by 5 | Viewed by 1968
Abstract
Soft sensing technologies offer promising prospects in the fields of soft robots, wearable devices, and biomedical instruments. However, the structural design, fabrication process, and sensing algorithm design of the soft devices confront great difficulties. In this paper, a soft tactile actuator (STA) with both the actuation function and sensing function is presented. The tactile physiotherapy finger of the STA was fabricated by a fluid silica gel material. Before pulse detection, the tactile physiotherapy finger was actuated to the detection position by injecting compressed air into its chamber. The pulse detecting algorithm, which realized the pulse detection function of the STA, is presented. Finally, in actual pulse detection experiments, the pulse values of the volunteers detected by using the STA and by employing a professional pulse meter were close, which illustrates the effectiveness of the pulse detecting algorithm of the STA. Full article
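
A typical pulse-detection step of the kind described (band-pass the measured pressure around heart-rate frequencies, then count peaks) is sketched below. The sampling rate, cutoffs, and synthetic signal are assumptions, not the STA's actual signal chain.

```python
# Minimal sketch of pulse detection from a pressure signal: band-pass around
# typical heart-rate frequencies, then count peaks with a refractory constraint.
# Sampling rate, cutoffs, and the synthetic signal are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                    # Hz
t = np.arange(0, 30, 1 / fs)                  # 30 s of pressure data
pressure = 0.4 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)  # ~72 bpm

b, a = butter(2, [0.8 / (fs / 2), 3.0 / (fs / 2)], btype="band")  # 48-180 bpm band
filtered = filtfilt(b, a, pressure)

peaks, _ = find_peaks(filtered, distance=int(0.4 * fs), height=0.1)  # refractory >= 0.4 s
bpm = 60.0 * len(peaks) / (t[-1] - t[0])
print(f"estimated pulse: {bpm:.1f} bpm")
```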

13 pages, 5163 KiB  
Article
Estimation of Tibiofemoral Joint Contact Forces Using Foot Loads during Continuous Passive Motions
by Yunlong Yang, Huixuan Huang, Junlong Guo, Fei Yu and Yufeng Yao
Sensors 2022, 22(13), 4947; https://doi.org/10.3390/s22134947 - 30 Jun 2022
Cited by 1 | Viewed by 1564
Abstract
Continuous passive motion (CPM) machines are commonly used after various knee surgeries, but information on tibiofemoral forces (TFFs) during CPM cycles is limited. This study aimed to explore the changing trend of TFFs during CPM cycles under various ranges of motion (ROM) and body weights (BW) by establishing a two-dimensional mathematical model. TFFs were estimated by using joint angles, foot load, and leg–foot weight. Eleven healthy male participants were tested with ROM ranging from 0° to 120°. The values of the peak TFFs during knee flexion were higher than those during knee extension, varying nonlinearly with ROM. BW had a significant main effect on the peak TFFs and tibiofemoral shear forces, while ROM had a limited effect on the peak TFFs. No significant interaction effects were observed between BW and ROM for each peak TFF, whereas a strong linear correlation existed between the peak tibiofemoral compressive forces (TFCFs) and the peak resultant TFFs (R2 = 0.971, p < 0.01). The proposed method showed promise in serving as an input for optimizing rehabilitation devices. Full article

19 pages, 6883 KiB  
Article
A Machine Learning Model for Predicting Sit-to-Stand Trajectories of People with and without Stroke: Towards Adaptive Robotic Assistance
by Thomas Bennett, Praveen Kumar and Virginia Ruiz Garate
Sensors 2022, 22(13), 4789; https://doi.org/10.3390/s22134789 - 24 Jun 2022
Cited by 2 | Viewed by 2487
Abstract
Sit-to-stand and stand-to-sit transfers are fundamental daily motions that enable all other types of ambulation and gait. However, the ability to perform these motions can be severely impaired by different factors, such as the occurrence of a stroke, limiting the ability to engage in other daily activities. This study presents the recording and analysis of a comprehensive database of full body biomechanics and force data captured during sit-to-stand-to-sit movements in subjects who have and have not experienced stroke. These data were then used in conjunction with simple machine learning algorithms to predict vertical motion trajectories that could be further employed for the control of an assistive robot. A total of 30 people (including 6 with stroke) each performed 20 sit-to-stand-to-sit actions at two different seat heights, from which average trajectories were created. Weighted k-nearest neighbours and linear regression models were then used on two different sets of key participant parameters (height and weight, and BMI and age), to produce a predicted trajectory. Resulting trajectories matched the true ones for non-stroke subjects with an average R2 score of 0.864±0.134 using k = 3 and 100% seat height when using height and weight parameters. Even among a small sample of stroke patients, balance and motion trends were noticed along with a large within-class variation, showing that larger scale trials need to be run to obtain significant results. The full dataset of sit-to-stand-to-sit actions for each user is made publicly available for further research. Full article
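
The trajectory-prediction step can be illustrated with scikit-learn's distance-weighted k-nearest-neighbours regressor, as sketched below. The value of k, the trajectory parameterization, and the synthetic training data are assumptions; the paper builds its averaged trajectories from the recorded dataset.

```python
# Minimal sketch of predicting an averaged vertical sit-to-stand trajectory from
# simple participant parameters (height and weight) with distance-weighted k-NN.
# k, the trajectory length, and the synthetic training data are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n_subjects, n_points = 24, 100
heights = rng.uniform(1.55, 1.95, n_subjects)           # m
weights = rng.uniform(55, 100, n_subjects)              # kg
phase = np.linspace(0, 1, n_points)
# Synthetic "average" vertical trajectories: rise scaled by subject height.
trajectories = np.outer(0.45 * heights, 0.5 * (1 - np.cos(np.pi * phase)))

X = np.column_stack([heights, weights])
knn = KNeighborsRegressor(n_neighbors=3, weights="distance").fit(X, trajectories)

new_subject = np.array([[1.72, 68.0]])                   # height (m), weight (kg)
predicted = knn.predict(new_subject)[0]                  # (100,) vertical trajectory
print(f"peak vertical displacement: {predicted.max():.3f} m")
```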

21 pages, 6852 KiB  
Article
Assessment of Pain Onset and Maximum Bearable Pain Thresholds in Physical Contact Situations
by Doyeon Han, Moonyoung Park, Junsuk Choi, Heonseop Shin, Donghwan Kim and Sungsoo Rhim
Sensors 2022, 22(8), 2996; https://doi.org/10.3390/s22082996 - 13 Apr 2022
Cited by 6 | Viewed by 2086
Abstract
With the development of robot technology, robot utilization is expanding in industrial fields and everyday life. To employ robots in various fields wherein humans and robots share the same space, human safety must be guaranteed in the event of a human–robot collision. Therefore, criteria and limitations of safety need to be defined and well clarified. In this study, we induced mechanical pain in humans through quasi-static contact by an algometric device (at 29 parts of the human body). A manual apparatus was developed to induce and monitor a force and pressure. Forty healthy men participated voluntarily in the study. Physical quantities were classified based on pain onset and maximum bearable pain. The overall results derived from the trials pertained to the subjective concept of pain, which led to considerable inter-individual variation in the onset and threshold of pain. Based on the results, a quasi-static contact pain evaluation method was established, and biomechanical safety limitations on forces and pressures were formulated. The pain threshold attributed to quasi-static contact can serve as a safety standard for the robots employed. Full article