Robot Assistant for Human-Robot Interaction and Healthcare

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (30 June 2023) | Viewed by 11402

Special Issue Editor


Dr. Tadej Petrič
Guest Editor
Jozef Stefan Institute, Ljubljana, Slovenia
Interests: robotics; humanoids; exoskeleton; nonlinear control; robot learning

Special Issue Information

Dear Colleagues,

Robots are already in use in many fields, and they are no longer just machines: they are becoming collaborators. Currently, collaborative robots are mainly used in industrial settings, where they share a workspace with humans without safety fences but where humans usually do not physically participate in the execution of the task. As a result, collaborative robots are rarely used outside of these industries. More meaningful human–robot collaboration therefore opens up entirely new applications for human–robot interaction in healthcare that were previously unfeasible. For example, robots could assist users, or even operate autonomously, in tasks that are typically repetitive and require vigorous movements. However, the field of human–robot interaction in healthcare is not yet mature, and more research is needed on safety and on individualized solutions before such systems are fully suitable for clinical use.

This Special Issue focuses on breakthrough developments in the field of human–robot interaction and healthcare. This includes scientific advances in physical human–robot interaction; human intent detection, recognition, and classification; machine, imitation, reinforcement, and deep learning; and any supporting sensory systems that facilitate human–robot collaboration in an unstructured environment. Papers should address innovative solutions in these areas. Both review articles and original research papers are welcome.

Dr. Tadej Petrič
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • assisting robotics
  • robots for rehabilitation and care
  • physical interaction between humans and robots
  • machine learning, deep learning, reinforcement learning, and imitation learning
  • sensors for human–robot interaction
  • human intention recognition
  • human recognition and classification
  • biomechanics and ergonomics
  • interface design
  • artificial intelligence

Published Papers (6 papers)


Research

18 pages, 9260 KiB  
Article
Object Affordance-Based Implicit Interaction for Wheelchair-Mounted Robotic Arm Using a Laser Pointer
by Yaxin Liu, Yan Liu, Yufeng Yao and Ming Zhong
Sensors 2023, 23(9), 4477; https://doi.org/10.3390/s23094477 - 4 May 2023
Cited by 1 | Viewed by 1936
Abstract
With the growth of the world's population, limited healthcare resources cannot provide adequate nursing services for all people in need. A wheelchair-mounted robotic arm (WMRA) with interactive technology could help to improve users' self-care ability and relieve nursing stress. However, users struggle to control the WMRA because of its complex operation. To make the WMRA less burdensome to use, this paper proposes an object affordance-based implicit interaction technology using a laser pointer. First, a laser semantic identification algorithm combining YOLOv4 and a support vector machine (SVM) is designed to identify laser semantics. Then, an implicit action intention reasoning algorithm based on the concept of object affordance is explored to infer users' intentions and learn their preferences. To perform the actions corresponding to the task intention in the scene, dynamic movement primitives (DMP) and a finite state machine (FSM) are used, respectively, to generalize the trajectories of actions and to reorder the sequence of actions in the template library. Finally, we verified the feasibility of the proposed technology on a WMRA platform. Compared with the previous method, the proposed technology outputs the desired intention faster and significantly reduces the user's limb involvement time (by about 85%) when operating the WMRA on the same task.
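As background to the trajectory-generalization step, the sketch below integrates a minimal one-dimensional dynamic movement primitive; the forcing term is omitted and the gains are illustrative assumptions, not the paper's, so the system simply converges from a start position to a goal:

```python
import numpy as np

def dmp_rollout(y0, goal, tau=1.0, alpha=25.0, dt=0.001, steps=1000):
    """Integrate a minimal 1-D dynamic movement primitive (no learned
    forcing term); the state converges from y0 to the goal position."""
    beta = alpha / 4.0          # critical damping
    y, z = y0, 0.0              # position and scaled velocity
    traj = [y]
    for _ in range(steps):
        dz = alpha * (beta * (goal - y) - z) / tau
        dy = z / tau
        z += dz * dt
        y += dy * dt
        traj.append(y)
    return np.array(traj)

traj = dmp_rollout(y0=0.0, goal=0.5)
```

A learned forcing term, fitted from demonstrations, would be added to `dz` to shape the transient while keeping the same convergence guarantee.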
(This article belongs to the Special Issue Robot Assistant for Human-Robot Interaction and Healthcare)

29 pages, 4929 KiB  
Article
Intelligent Eye-Controlled Electric Wheelchair Based on Estimating Visual Intentions Using One-Dimensional Convolutional Neural Network and Long Short-Term Memory
by Sho Higa, Koji Yamada and Shihoko Kamisato
Sensors 2023, 23(8), 4028; https://doi.org/10.3390/s23084028 - 16 Apr 2023
Cited by 3 | Viewed by 1832
Abstract
When an electric wheelchair is operated using gaze motion, eye movements such as checking the environment and observing objects are also incorrectly recognized as input operations. This phenomenon is called the “Midas touch problem”, and classifying visual intentions is therefore extremely important. In this paper, we develop a deep learning model that estimates the user's visual intention in real time, together with an electric wheelchair control system that combines intention estimation with the gaze dwell time method. The proposed model is a 1DCNN-LSTM that estimates visual intention from feature vectors of 10 variables, such as eye movement, head movement, and distance to the fixation point. Evaluation experiments classifying four types of visual intention show that the proposed model achieves the highest accuracy of the models compared. In addition, driving experiments with an electric wheelchair implementing the proposed model show that the user's effort to operate the wheelchair is reduced and that its operability is improved compared to the traditional method. From these results, we conclude that visual intentions can be estimated more accurately by learning time-series patterns from eye and head movement data.
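To illustrate the kind of architecture described, here is a minimal pure-NumPy sketch of a 1DCNN-LSTM forward pass over one window of gaze/head features; the window length, layer sizes, and random weights are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution with ReLU: x is (T, C_in), w is (K, C_in, C_out)."""
    K, _, C_out = w.shape
    T = x.shape[0] - K + 1
    out = np.empty((T, C_out))
    for t in range(T):
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_last_hidden(x, Wx, Wh, b, H):
    """Run one LSTM layer over (T, C) inputs; return the last hidden state."""
    h, c = np.zeros(H), np.zeros(H)
    for t in range(x.shape[0]):
        gates = x[t] @ Wx + h @ Wh + b          # (4H,) pre-activations
        i, f, g, o = np.split(gates, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

T, C, K, F, H, n_classes = 30, 10, 5, 16, 32, 4    # window, features, kernel, ...
x = rng.normal(size=(T, C))                         # one window of 10 gaze/head variables
w = rng.normal(scale=0.1, size=(K, C, F)); b = np.zeros(F)
Wx = rng.normal(scale=0.1, size=(F, 4 * H))
Wh = rng.normal(scale=0.1, size=(H, 4 * H)); bl = np.zeros(4 * H)
Wo = rng.normal(scale=0.1, size=(H, n_classes)); bo = np.zeros(n_classes)

h = lstm_last_hidden(conv1d(x, w, b), Wx, Wh, bl, H)
logits = h @ Wo + bo
probs = np.exp(logits - logits.max()); probs /= probs.sum()   # softmax over 4 intentions
```

In a real system the weights would be trained end-to-end and the four softmax outputs would correspond to the four visual intention classes.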

17 pages, 3992 KiB  
Article
Classification Models of Action Research Arm Test Activities in Post-Stroke Patients Based on Human Hand Motion
by Jesus Fernando Padilla-Magaña and Esteban Peña-Pitarch
Sensors 2022, 22(23), 9078; https://doi.org/10.3390/s22239078 - 23 Nov 2022
Cited by 1 | Viewed by 1434
Abstract
The Action Research Arm Test (ARAT) presents a ceiling effect that prevents the detection of improvements produced by rehabilitation treatments in stroke patients with mild finger joint impairments. The aim of this study was to develop classification models that predict whether activities with similar ARAT scores were performed by a healthy subject or by a subject post-stroke, using the extension and flexion angles of 11 finger joints as features. For this purpose, we used three algorithms: Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbors (KNN). The dataset presented class imbalance, and the classification models showed low recall, especially for the stroke class. We therefore balanced the classes using Borderline-SMOTE. After data balancing, the classification models showed significantly higher accuracy, recall, f1-score, and AUC. Among them, the SVM classifier performed best, with a precision of 98%, a recall of 97.5%, and an AUC of 0.996. The results show that classification models based on human hand motion features, combined with the oversampling algorithm Borderline-SMOTE, achieve higher performance. Furthermore, our study suggests that there are differences between ARAT activities performed by healthy and post-stroke individuals that are not detected by the ARAT scoring process.
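As an illustration of the oversampling idea, the sketch below generates synthetic minority samples by interpolating between minority-class nearest neighbours. This is a simplified SMOTE-style scheme, not the full Borderline-SMOTE algorithm (which restricts the seed samples to those near the class boundary); the feature dimension of 11 mirrors the joint angles used in the study, but the data here are synthetic:

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by interpolating each seed
    sample toward one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # exclude self-distances
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per sample
    synth = np.empty((n_new, X_min.shape[1]))
    for j in range(n_new):
        i = rng.integers(len(X_min))           # pick a minority sample
        nb = X_min[rng.choice(nn[i])]          # and one of its neighbours
        lam = rng.random()                     # interpolation factor in [0, 1)
        synth[j] = X_min[i] + lam * (nb - X_min[i])
    return synth

# toy imbalanced minority class: 20 trials x 11 joint-angle features (illustrative)
rng = np.random.default_rng(1)
X_min = rng.normal(size=(20, 11))
X_new = smote_like_oversample(X_min, n_new=80)
```

The synthetic samples lie on segments between existing minority points, which is why SMOTE-style balancing improves recall without merely duplicating data.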

27 pages, 95183 KiB  
Article
HRpI System Based on Wavenet Controller with Human Cooperative-in-the-Loop for Neurorehabilitation Purposes
by Juan Daniel Ramirez-Zamora, Omar Arturo Dominguez-Ramirez, Luis Enrique Ramos-Velasco, Gabriel Sepulveda-Cervantes, Vicente Parra-Vega, Alejandro Jarillo-Silva and Eduardo Alejandro Escotto-Cordova
Sensors 2022, 22(20), 7729; https://doi.org/10.3390/s22207729 - 12 Oct 2022
Cited by 2 | Viewed by 1783
Abstract
Several methods for human–robot physical interaction (HRpI) exist to provide physical therapy to patients. Haptics has become an option for displaying forces along a given path so as to guide the physiotherapy protocol, and motion control for haptic guidance is critical for conveying the specifications of the clinical protocol. Given the inherent variability among patients, a key demand on these HRpI methods is the ability to modify their response online, neither rejecting nor neglecting interaction forces but processing them as patient interaction. In this paper, considering the nonlinear dynamics of a robot interacting bilaterally with a patient, we propose a novel adaptive control scheme that guarantees stable haptic guidance by processing the causality of patient interaction forces, despite unknown robot dynamics and uncertainties. The controller implements a radial basis neural network with RASP1 daughter wavelet activation functions to identify the coupled interaction dynamics. For efficient online implementation, an output infinite impulse response filter prunes negligible signals and nodes to deal with overparametrization. This makes it possible to adapt online the feedback gains of a globally stable discrete PID regulator to yield stiffness control, so that the user is guided within a perceptual force field. The effectiveness of the proposed method is verified in real-time bimanual human-in-the-loop experiments.
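As background to the stiffness-control step, the sketch below runs a plain discrete PID regulator guiding a simulated unit mass toward a target position. The fixed gains and the trivial plant are illustrative stand-ins only; the paper's contribution is precisely that these gains are adapted online by the wavelet neural network, which is not reproduced here:

```python
def discrete_pid_step(e, state, kp, ki, kd, dt):
    """One update of a discrete PID regulator.
    state = (error integral, previous error)."""
    integ, e_prev = state
    integ += e * dt
    u = kp * e + ki * integ + kd * (e - e_prev) / dt
    return u, (integ, e)

# guide a unit mass toward a target position (a stand-in for haptic guidance)
dt, target = 0.001, 0.3
x, v = 0.0, 0.0
state = (0.0, target - x)                  # start with zero derivative kick
for _ in range(5000):
    u, state = discrete_pid_step(target - x, state,
                                 kp=400.0, ki=10.0, kd=40.0, dt=dt)
    v += u * dt                            # unit mass: acceleration = force
    x += v * dt
```

With kp = 400 and kd = 40 the closed loop is critically damped, so the mass settles at the target within the 5 s of simulated time.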

11 pages, 2834 KiB  
Article
Characterization of the Workspace and Limits of Operation of Laser Treatments for Vascular Lesions of the Lower Limbs
by Bruno Oliveira, Pedro Morais, Helena R. Torres, António L. Baptista, Jaime C. Fonseca and João L. Vilaça
Sensors 2022, 22(19), 7481; https://doi.org/10.3390/s22197481 - 2 Oct 2022
Cited by 4 | Viewed by 1693
Abstract
The growth of the aging population brings numerous challenges to the health and aesthetics sectors. Here, the use of laser therapy in dermatology is expected to increase, since it allows for non-invasive and infection-free treatments. However, existing laser devices require doctors to manually handle the device and visually inspect the skin. As such, the treatment outcome depends on the user's expertise, which frequently results in ineffective treatments and side effects. This study aims to determine the workspace and limits of operation of laser treatments for vascular lesions of the lower limbs. Its results can be used to develop a robot-guided technology that helps address the aforementioned problems. Specifically, the workspace and limits of operation were studied in eight vascular laser treatments, using an electromagnetic tracking system to collect the real-time position of the laser during the treatments. The computed average workspace length, height, and width were 0.84 ± 0.15, 0.41 ± 0.06, and 0.78 ± 0.16 m, respectively, corresponding to an average treatment volume of 0.277 ± 0.093 m³. The average treatment time was 23.2 ± 10.2 min, with an average laser orientation of 40.6 ± 5.6 degrees. Additionally, average linear and angular velocities of 0.124 ± 0.103 m/s and 31.5 ± 25.4 deg/s were measured. This knowledge characterizes the workspace and limits of operation of vascular laser treatment and may inform future robotic system development.
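A workspace characterization of this kind can be sketched from tracked positions alone. The toy example below computes bounding-box extents, volume, and mean linear speed from a synthetic trajectory; the sampling rate and trajectory are illustrative assumptions, not the study's data:

```python
import numpy as np

def workspace_stats(P, dt):
    """P: (N, 3) tracked positions [m] sampled every dt seconds.
    Returns per-axis extents, bounding-box volume [m^3], and mean speed [m/s]."""
    extents = P.max(axis=0) - P.min(axis=0)              # per-axis range
    volume = float(np.prod(extents))
    speeds = np.linalg.norm(np.diff(P, axis=0), axis=1) / dt
    return extents, volume, float(speeds.mean())

# synthetic trajectory sampled at 100 Hz for 10 s (illustrative)
t = np.linspace(0.0, 10.0, 1001)
P = np.stack([0.4 * np.sin(t), 0.2 * np.cos(t), 0.1 * t / 10.0], axis=1)
extents, volume, mean_speed = workspace_stats(P, dt=0.01)
```

Orientation statistics would follow the same pattern using the tracker's rotation data instead of positions.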

19 pages, 3779 KiB  
Article
Zero Moment Line—Universal Stability Parameter for Multi-Contact Systems in Three Dimensions
by Tilen Brecelj and Tadej Petrič
Sensors 2022, 22(15), 5656; https://doi.org/10.3390/s22155656 - 28 Jul 2022
Cited by 4 | Viewed by 1722
Abstract
The widely used stability parameter, the zero moment point (ZMP), is usually defined on the ground. In this paper, it is redefined in two different ways to acquire a more general form that allows its application to systems that are not supported only on the ground and whose support polygon therefore does not extend only over the floor. This makes it possible to determine the stability of humanoid and other floating-base robots that interact with the environment at arbitrary heights. In the first redefinition, the ZMP is represented as the line containing all possible ZMPs, called the zero moment line (ZML); in the second, it is represented as the ZMP angle, i.e., the angle between the ZML and the vertical line passing through the center of mass (COM) of the investigated system. The first redefinition is useful when the external forces and their points of application are known, while the second can be applied when the COM of the system under study is known and can be tracked. The first redefinition of the ZMP is also applied to two measurements performed with two force plates, two force sensors, and the OptiTrack system. In the first measurement, a subject stands up from a bench and sits down while being pulled by the hands; in the second, two subjects stand still, hold on to two double handles, and lean backward. In both cases, the stability of the subjects involved in the measurements is investigated and discussed.
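For background, the classical ground-plane ZMP that the paper generalizes can be computed for a set of point masses as sketched below; the values are illustrative, and this is the standard textbook formula rather than the paper's ZML redefinition:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def zmp_xy(m, pos, acc):
    """Classical ground-plane ZMP of point masses.
    m: (N,) masses [kg]; pos, acc: (N, 3) positions [m] and accelerations [m/s^2].
    Returns (x_zmp, y_zmp) on the floor (z = 0)."""
    fz = m * (acc[:, 2] + G)                     # vertical inertial + gravity forces
    x = (np.sum(fz * pos[:, 0]) - np.sum(m * acc[:, 0] * pos[:, 2])) / np.sum(fz)
    y = (np.sum(fz * pos[:, 1]) - np.sum(m * acc[:, 1] * pos[:, 2])) / np.sum(fz)
    return x, y

# static single mass: the ZMP lies directly below the centre of mass
m = np.array([70.0])
pos = np.array([[0.1, -0.05, 0.9]])
acc = np.zeros((1, 3))
x_zmp, y_zmp = zmp_xy(m, pos, acc)               # (0.1, -0.05) up to rounding
```

The ZML generalization replaces this single ground-plane point with the whole line of admissible zero-moment points, so contacts above the floor can be handled.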
