Assistive Robotic Navigation Using Deep Reinforcement Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (20 February 2024) | Viewed by 2262

Special Issue Editors


Guest Editor
School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
Interests: swarm intelligence; multi-agent reinforcement learning; quantum artificial intelligence

Guest Editor
School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
Interests: swarm intelligence; multi-agent reinforcement learning; explainable machine learning

Special Issue Information

Dear Colleagues,

Assistive robotics aims to provide assistance to individuals with disabilities or limitations in their daily lives, allowing them to perform tasks more independently and efficiently. Navigation is a fundamental aspect of assistive robotics, as it involves the robot's ability to perceive and interpret its surroundings, plan optimal paths, and interact safely with the environment.

Deep reinforcement learning, a subfield of machine learning, plays a crucial role in enhancing the navigation capabilities of assistive robots. It combines deep neural networks with reinforcement learning algorithms to enable robots to learn from their interactions with the environment and make intelligent decisions in real time. By using deep reinforcement learning, assistive robots can acquire navigation skills, adapt to dynamic environments, and respond to user needs effectively.
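To make the learning loop concrete, the following is a minimal sketch of the reinforcement-learning cycle underlying such systems: a tabular Q-learning agent (a deliberately simplified stand-in for the deep variants discussed in this issue) learns to reach a goal cell on a toy grid while avoiding a static obstacle. The environment, reward values, and hyperparameters below are illustrative assumptions, not drawn from any paper in this Special Issue.

```python
import random

# Toy grid-world Q-learning sketch (illustrative assumptions throughout).
GRID, GOAL, OBSTACLE = 5, (4, 4), (2, 2)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    x = min(max(state[0] + action[0], 0), GRID - 1)
    y = min(max(state[1] + action[1], 0), GRID - 1)
    if (x, y) == OBSTACLE:
        return state, -1.0, False      # collision penalty; agent stays put
    if (x, y) == GOAL:
        return (x, y), 10.0, True      # goal reward ends the episode
    return (x, y), -0.1, False         # step cost encourages short paths

def greedy(q, state):
    """Index of the highest-valued action in the current state."""
    return max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    random.seed(seed)
    q = {}
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(100):  # cap episode length
            if random.random() < epsilon:
                a = random.randrange(len(ACTIONS))
            else:
                a = greedy(q, state)
            nxt, reward, done = step(state, ACTIONS[a])
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
            if done:
                break
    return q

def rollout(q, max_steps=30):
    """Follow the learned greedy policy from the start cell."""
    state, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        state, _, done = step(state, ACTIONS[greedy(q, state)])
        path.append(state)
        if done:
            break
    return path
```

Deep reinforcement learning replaces the lookup table with a neural network so the same trial-and-error loop scales to the high-dimensional sensor observations assistive robots actually face.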

The application of deep reinforcement learning in assistive robotics navigation has the potential to revolutionize various domains, including healthcare, rehabilitation, elderly care, and personal assistance. It opens up possibilities for robots to assist individuals in tasks such as mobility support, object retrieval, environmental exploration, and obstacle avoidance. Moreover, advancements in this field can lead to the development of more intuitive and user-friendly human–robot interfaces, fostering natural and seamless interactions between robots and users.

Dr. Cheng Xu
Dr. Shihong Duan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • assistive robotics
  • navigation
  • deep reinforcement learning
  • autonomous robots
  • machine learning
  • human–robot interaction
  • explainable reinforcement learning
  • swarm robots
  • multi-agent reinforcement learning
  • assistive devices

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)

Research

20 pages, 5452 KiB  
Article
Risk-Aware Deep Reinforcement Learning for Robot Crowd Navigation
by Xueying Sun, Qiang Zhang, Yifei Wei and Mingmin Liu
Electronics 2023, 12(23), 4744; https://doi.org/10.3390/electronics12234744 - 22 Nov 2023
Cited by 3 | Viewed by 1841
Abstract
Ensuring safe and efficient navigation in crowded environments is a critical goal for assistive robots. Recent studies have emphasized the potential of deep reinforcement learning techniques to enhance robots’ navigation capabilities in the presence of crowds. However, current deep reinforcement learning methods often face the challenge of robots freezing as crowd density increases. To address this issue, a novel risk-aware deep reinforcement learning approach is proposed in this paper. The proposed method integrates a risk function to assess the probability of collision between the robot and pedestrians, enabling the robot to proactively prioritize pedestrians with a higher risk of collision. Furthermore, the model dynamically adjusts the fusion strategy of learning-based and risk-aware-based features, thereby improving the robustness of robot navigation. Evaluations were conducted to determine the effectiveness of the proposed method in both low- and high-crowd density settings. The results exhibited remarkable navigation success rates of 98.0% and 93.2% in environments with 10 and 20 pedestrians, respectively. These findings emphasize the robust performance of the proposed method in successfully navigating through crowded spaces. Additionally, the approach achieves navigation times comparable to those of state-of-the-art methods, confirming its efficiency in accomplishing navigation tasks. The generalization capability of the method was also rigorously assessed by subjecting it to testing in crowd environments exceeding the training density. Notably, the proposed method attains an impressive navigation success rate of 90.0% in 25-person environments, surpassing the performance of existing approaches and establishing itself as a state-of-the-art solution. This result highlights the versatility and effectiveness of the proposed method in adapting to various crowd densities and further reinforces its applicability in real-world scenarios.
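As a rough illustration of the idea the abstract describes (the paper's actual risk function is not reproduced here; every name, formula, and constant below is an assumption for illustration only), a collision-risk score can be built from each pedestrian's closest point of approach to the robot, so that pedestrians on a closing course at short range rank highest:

```python
import math

# Hypothetical collision-risk score based on closest point of approach (CPA).
# All parameters (radius_sum, horizon) and the decay form are assumptions.
def collision_risk(robot_pos, robot_vel, ped_pos, ped_vel,
                   radius_sum=0.6, horizon=5.0):
    """Return a risk score in [0, 1]; higher means a more imminent collision."""
    rx, ry = ped_pos[0] - robot_pos[0], ped_pos[1] - robot_pos[1]
    vx, vy = ped_vel[0] - robot_vel[0], ped_vel[1] - robot_vel[1]
    speed_sq = vx * vx + vy * vy
    # Time of closest approach, clamped to [0, horizon].
    if speed_sq < 1e-9:
        t_cpa = 0.0
    else:
        t_cpa = max(0.0, min(horizon, -(rx * vx + ry * vy) / speed_sq))
    # Separation at closest approach.
    d_cpa = math.hypot(rx + vx * t_cpa, ry + vy * t_cpa)
    # Risk decays with clearance beyond the combined radii and with how far
    # in the future the closest approach occurs.
    clearance = max(0.0, d_cpa - radius_sum)
    return math.exp(-clearance) * (1.0 - t_cpa / horizon)

def prioritize(robot_pos, robot_vel, pedestrians):
    """Sort (position, velocity) pairs so the riskiest pedestrian comes first."""
    return sorted(pedestrians,
                  key=lambda p: -collision_risk(robot_pos, robot_vel, p[0], p[1]))
```

In a risk-aware pipeline of the kind the abstract outlines, scores like these would be fused with learned features so the policy attends first to the pedestrians most likely to cause a collision.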
(This article belongs to the Special Issue Assistive Robotic Navigation Using Deep Reinforcement Learning)