Intelligent Human–Robot Interaction: 3rd Edition

A special issue of Biomimetics (ISSN 2313-7673). This special issue belongs to the section "Locomotion and Bioinspired Robotics".

Deadline for manuscript submissions: 31 March 2025

Special Issue Editors


Prof. Dr. Jun Huang
Guest Editor
School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
Interests: intelligent remanufacturing technology; robotics and automation; human–machine collaboration; optical fiber sensing and intelligent sensing technology; mechanical equipment condition monitoring and fault diagnosis

Dr. Ruiya Li
Guest Editor
School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China
Interests: fiber optic sensing; robot force/position hybrid control; special robots

Special Issue Information

Dear Colleagues,

Human–robot interaction (HRI) is a multi-disciplinary field that encompasses artificial intelligence, robotics, human–computer interaction, machine vision, natural language understanding, and social science. With the rapid development of AI and robotics, intelligent HRI has become an increasingly active research topic in robotics.

Intelligent HRI involves many scientific and technological challenges, particularly human-centered ones. These include human expectations of, attitudes towards, and perceptions of robots; the safety, acceptability, and comfort of robotic behaviors; and the physical closeness of robots to humans. Robots, in turn, are expected to understand human attention, intention, and even emotion, and to respond promptly with the support of AI. Achieving excellent intelligent HRI requires R&D in this multi- and cross-disciplinary field, with efforts expected in all relevant aspects, including actuation, sensing, perception, control, recognition, planning, learning, AI algorithms, intelligent I/O, and integrated systems.

The aim of this Special Issue is to present new concepts, ideas, findings, and the latest achievements in both theoretical research and technical development in intelligent HRI. We invite scientists and engineers from robotics, AI, computer science, and other relevant disciplines to present the latest results of their research and development in this field. The topics of interest include, but are not limited to, the following:

  • Intelligent sensors and systems;
  • Bio-inspired sensing and learning;
  • Multi-modal perception and recognition;
  • Social robotics;
  • Autonomous behaviors of robots;
  • AI algorithms in robotics;
  • Collaboration between humans and robots;
  • Advances and future challenges of HRI.

Prof. Dr. Jun Huang
Dr. Ruiya Li
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent sensors and systems
  • bio-inspired sensing and learning
  • multi-modal perception and recognition
  • social robotics
  • autonomous behaviors of robots
  • AI algorithms in robotics
  • collaboration between humans and robots
  • advances and future challenges of HRI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (4 papers)


Research

17 pages, 2630 KiB  
Article
Multimodal Deep Learning Model for Cylindrical Grasp Prediction Using Surface Electromyography and Contextual Data During Reaching
by Raquel Lázaro, Margarita Vergara, Antonio Morales and Ramón A. Mollineda
Biomimetics 2025, 10(3), 145; https://doi.org/10.3390/biomimetics10030145 - 27 Feb 2025
Abstract
Grasping objects, from simple tasks to complex fine motor skills, is a key component of our daily activities. Our approach to facilitating the development of advanced prosthetics, robotic hands, and human–machine interaction systems consists of collecting and combining surface electromyography (EMG) signals and contextual data from individuals performing manipulation tasks. In this context, the identification of patterns and prediction of hand grasp types is crucial, with the cylindrical grasp being one of the most common and functional types. Traditional approaches to grasp prediction often rely on unimodal data sources, limiting their ability to capture the complexity of real-world scenarios. In this work, grasp prediction models that integrate both EMG signals and contextual (task- and product-related) information have been explored to improve the prediction of cylindrical grasps during reaching movements. Three model architectures are presented: an EMG processing model based on convolutions that analyzes forearm surface EMG data, a fully connected model for processing contextual information, and a hybrid architecture combining both inputs, resulting in a multimodal model. The results show that context has great predictive power. Variables such as object size and weight (product-related) were found to have a greater impact on model performance than task height (task-related). Combining EMG and product context yielded better results than using each data mode separately, confirming the importance of product context in improving EMG-based models of grasping.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
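
As a concrete illustration of the three architectures described in the abstract, the sketch below shows a convolutional branch for multi-channel surface EMG windows, a fully connected branch for contextual (task/product) features, and a hybrid model fusing both. It is a minimal, hypothetical PyTorch sketch: all class names, layer sizes, channel counts, and feature choices are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of the three model types named in the abstract.
# Channel counts, window length, and layer sizes are illustrative only.
import torch
import torch.nn as nn

class EMGBranch(nn.Module):
    """1D-convolutional encoder for forearm surface EMG windows."""
    def __init__(self, n_channels=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # -> (batch, 64, 1)
        )

    def forward(self, x):               # x: (batch, n_channels, n_samples)
        return self.conv(x).squeeze(-1)  # (batch, 64)

class ContextBranch(nn.Module):
    """Fully connected encoder for contextual features,
    e.g. object size, object weight, task height (assumed encoding)."""
    def __init__(self, n_features=3):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU())

    def forward(self, x):
        return self.fc(x)               # (batch, 16)

class HybridGraspModel(nn.Module):
    """Multimodal model: concatenates both embeddings and predicts
    a logit for 'cylindrical grasp' vs. other grasp types."""
    def __init__(self):
        super().__init__()
        self.emg = EMGBranch()
        self.ctx = ContextBranch()
        self.head = nn.Linear(64 + 16, 1)

    def forward(self, emg, ctx):
        z = torch.cat([self.emg(emg), self.ctx(ctx)], dim=1)
        return self.head(z)

# Example forward pass with dummy data (batch of 4, 8 channels, 200 samples):
model = HybridGraspModel()
logits = model(torch.randn(4, 8, 200), torch.randn(4, 3))
```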

25 pages, 1681 KiB  
Article
Multi-Modal Social Robot Behavioural Alignment and Learning Outcomes in Mediated Child–Robot Interactions
by Paul Baxter
Biomimetics 2025, 10(1), 50; https://doi.org/10.3390/biomimetics10010050 - 14 Jan 2025
Abstract
With the increasing application of robots in human-centred environments, there is increasing motivation for incorporating some degree of human-like social competence. Fields such as psychology and cognitive science not only provide guidance on the types of behaviour that could and should be exhibited by robots, but may also indicate the manner in which these behaviours can be achieved. The domain of social child–robot interaction (sCRI) provides a number of challenges and opportunities in this regard; the application to an educational context allows child learning outcomes to be characterised as a result of robot social behaviours. One such social behaviour that is readily (and unconsciously) used by humans is behavioural alignment, in which the behaviours expressed by one person adapt to those of their interaction partner, and vice versa. In this paper, the role that a robot's non-verbal behavioural alignment with its interaction partner can play in facilitating learning outcomes for the child is examined. This behavioural alignment is driven by a human memory-inspired learning algorithm that adapts in real time over the course of an interaction. A large touchscreen is employed as a mediating device between the child and the robot. Collaborative sCRI is emphasised, with the touchscreen providing a common set of interaction affordances for both child and robot. The results show that an adaptive robot is capable of engaging in behavioural alignment, and indicate that this leads to greater learning gains for the children. This study demonstrates the specific contribution that behavioural alignment makes to improving learning outcomes for children when employed by social robot interaction partners in educational contexts.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
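
The paper's alignment mechanism is a memory-inspired algorithm described in the article itself. As a loose illustration of the general idea only (not the author's method), the sketch below keeps a decaying activation trace of the partner's observed behaviours and biases the robot's own behaviour selection toward them; all names and parameter values are hypothetical.

```python
# Toy illustration of online behavioural alignment: a decaying trace of
# the partner's behaviours biases the robot's own behaviour choices.
# NOT the paper's memory-inspired algorithm, only the general principle.
import random

class AlignmentPolicy:
    def __init__(self, behaviours, decay=0.9, align_weight=0.5):
        self.trace = {b: 1.0 for b in behaviours}  # uniform prior activation
        self.decay = decay                # how quickly old observations fade
        self.align_weight = align_weight  # 0 = ignore partner, 1 = fully match

    def observe_partner(self, behaviour):
        # Decay all traces, then reinforce the behaviour just observed.
        for b in self.trace:
            self.trace[b] *= self.decay
        self.trace[behaviour] += 1.0

    def choose_behaviour(self):
        # Mix a uniform baseline with the partner-aligned distribution.
        total = sum(self.trace.values())
        n = len(self.trace)
        weights = [(1 - self.align_weight) / n + self.align_weight * v / total
                   for v in self.trace.values()]
        return random.choices(list(self.trace), weights=weights)[0]

# Usage: the robot drifts toward behaviours its partner uses frequently.
policy = AlignmentPolicy(["point", "nod", "gaze_at_screen"])
policy.observe_partner("nod")
print(policy.choose_behaviour())
```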

24 pages, 6055 KiB  
Article
Analyzing the Impact of Responding to Joint Attention on the User Perception of the Robot in Human-Robot Interaction
by Jesús García-Martínez, Juan José Gamboa-Montero, José Carlos Castillo and Álvaro Castro-González
Biomimetics 2024, 9(12), 769; https://doi.org/10.3390/biomimetics9120769 - 18 Dec 2024
Abstract
The concept of joint attention holds significant importance in human interaction and is pivotal in establishing rapport, understanding, and effective communication. Within social robotics, enhancing user perception of the robot and promoting a sense of natural interaction with robots become central elements. In this sense, emulating human-centric qualities in social robots, such as joint attention, defined as the ability of two or more individuals to focus on a common event simultaneously, can increase their acceptability. This study analyses the impact on user perception of a responsive joint attention system integrated into a social robot within an interactive scenario. The experimental setup involves playing against the robot in the “Odds and Evens” game under two conditions: with the joint attention system either active or inactive. Additionally, auditory and visual distractors are employed to simulate real-world distractions, aiming to test the system’s ability to capture and follow user attention effectively. To assess the influence of the joint attention system, participants completed the Robotic Social Attributes Scale (RoSAS) after each interaction. The results showed a significant improvement in user perception of the robot’s competence and warmth when the joint attention system was active.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
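
As a rough sketch of what a responsive joint-attention behaviour can look like (not the system used in this study), the toy function below emits a robot “look at” command whenever the user's estimated gaze target changes, so the robot follows the user's focus across distractors. The gaze labels and interface are invented for illustration.

```python
# Toy joint-attention responder: follow each shift in the user's gaze.
# Gaze targets would come from a perception module in a real system.
def joint_attention_responses(gaze_stream, active=True):
    """Yield a robot look-at command for each change in user gaze target."""
    last = None
    for target in gaze_stream:
        if active and target != last:
            yield ("look_at", target)   # re-orient head to the shared target
            last = target

# Example: the user looks at the game screen, is distracted, then returns.
stream = ["screen", "screen", "distractor_left", "screen"]
print(list(joint_attention_responses(stream)))
# [('look_at', 'screen'), ('look_at', 'distractor_left'), ('look_at', 'screen')]
```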

29 pages, 5444 KiB  
Article
Task Allocation and Sequence Planning for Human–Robot Collaborative Disassembly of End-of-Life Products Using the Bees Algorithm
by Jun Huang, Sheng Yin, Muyao Tan, Quan Liu, Ruiya Li and Duc Pham
Biomimetics 2024, 9(11), 688; https://doi.org/10.3390/biomimetics9110688 - 11 Nov 2024
Abstract
Remanufacturing, which benefits the environment and saves resources, is attracting increasing attention. Disassembly is arguably the most critical step in the remanufacturing of end-of-life (EoL) products. Human–robot collaborative disassembly, as a flexible semi-automated approach, can increase productivity and relieve people of tedious, laborious, and sometimes hazardous jobs. Task allocation in human–robot collaborative disassembly involves methodically assigning disassembly tasks to human operators or robots. However, the task allocation schemes in recent studies have not been sufficiently refined, and the issue of component placement after disassembly has not been fully addressed. This paper presents a method of task allocation and sequence planning for human–robot collaborative disassembly of EoL products. The adopted criteria for human–robot disassembly task allocation are introduced. The disassembly of each component includes dismantling and placing. The performance of a disassembly plan is evaluated according to its time, cost, and utility value. A discrete Bees Algorithm using genetic operators is employed to optimise the generated human–robot collaborative disassembly solutions. The proposed task allocation and sequence planning method is validated in two case studies involving an electric motor and a power battery from an EoL vehicle. The results demonstrate the feasibility of the proposed method for planning and optimising human–robot collaborative disassembly solutions.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 3rd Edition)
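
To give a flavour of how a discrete Bees Algorithm with genetic operators can search the space of human–robot disassembly plans, the sketch below pairs a task order with a human/robot assignment and alternates elite-site local search with random scouting. The fitness function, precedence handling, and all parameter values are placeholders rather than the paper's formulation.

```python
# Hypothetical discrete Bees Algorithm sketch for disassembly planning.
# A solution is (task order, human/robot assignment); fitness is a stub.
import random

TASKS = list(range(10))          # disassembly tasks 0..9 (illustrative)
AGENTS = ["human", "robot"]

def random_solution():
    order = random.sample(TASKS, len(TASKS))
    assign = [random.choice(AGENTS) for _ in TASKS]
    return order, assign

def fitness(solution):
    # Placeholder objective: a real version would score time, cost, and
    # utility value, and penalise violations of precedence constraints.
    order, assign = solution
    return -sum(i * (2 if a == "human" else 1) for i, a in zip(order, assign))

def neighbour(solution):
    # Genetic-style operators: swap two tasks and re-draw one assignment.
    order, assign = list(solution[0]), list(solution[1])
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]
    assign[i] = random.choice(AGENTS)
    return order, assign

def bees_algorithm(n_scouts=20, n_elite=3, n_recruits=5, n_iter=100):
    population = [random_solution() for _ in range(n_scouts)]
    for _ in range(n_iter):
        population.sort(key=fitness, reverse=True)
        new_pop = []
        # Exploit the elite sites with recruited bees (local search).
        for sol in population[:n_elite]:
            candidates = [sol] + [neighbour(sol) for _ in range(n_recruits)]
            new_pop.append(max(candidates, key=fitness))
        # Remaining bees scout new random solutions (global search).
        new_pop += [random_solution() for _ in range(n_scouts - n_elite)]
        population = new_pop
    return max(population, key=fitness)

best_order, best_assign = bees_algorithm()
print(best_order, best_assign)
```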