Article

Learning Advanced Locomotion for Quadrupedal Robots: A Distributed Multi-Agent Reinforcement Learning Framework with Riemannian Motion Policies

1 Intelligent and Mechanical Interaction System, University of Tsukuba, Tsukuba 305-8577, Ibaraki, Japan
2 Computer Vision Research Team, Artificial Intelligence Research Center, The National Institute of Advanced Industrial Science and Technology, 1-1-1 Umezono, Tsukuba 305-8560, Ibaraki, Japan
* Author to whom correspondence should be addressed.
Robotics 2024, 13(6), 86; https://doi.org/10.3390/robotics13060086
Submission received: 31 March 2024 / Revised: 16 May 2024 / Accepted: 20 May 2024 / Published: 28 May 2024
(This article belongs to the Special Issue Applications of Neural Networks in Robot Control)

Abstract

Recent advancements in quadrupedal robotics have explored the motor potential of these machines beyond simple walking, enabling highly dynamic skills such as jumping, backflips, and even bipedal locomotion. While reinforcement learning has demonstrated excellent performance in this domain, it often relies on complex reward function tuning and prolonged training times, and it offers limited interpretability. Riemannian motion policies, a reactive control method, excel at handling highly dynamic systems but are generally limited to fully actuated systems, making their application to underactuated quadrupedal robots challenging. To address these limitations, we propose a novel framework that treats each leg of a quadrupedal robot as an intelligent agent and employs multi-agent reinforcement learning to coordinate the motion of all four legs. This decomposition satisfies the conditions for applying Riemannian motion policies and eliminates the need for complex reward functions, simplifying the learning of high-level motion modalities. Our simulation experiments demonstrate that the proposed method enables quadrupedal robots to learn stable locomotion using three, two, or even a single leg, offering advantages over traditional approaches in training speed, success rate, stability, and interpretability. This research explores the possibility of developing more efficient and adaptable control policies for quadrupedal robots.
Keywords: multi-agent reinforcement learning; Riemannian motion policies; motion control; robot learning; quadrupedal robotics
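As a rough illustration of the per-leg decomposition described in the abstract, the sketch below treats each leg as an agent whose policy outputs a desired acceleration and a Riemannian metric, and combines the four contributions with the standard RMP resolve rule. This is a minimal sketch under stated assumptions, not the authors' implementation: the names LegAgent and resolve_rmps, the toy linear "policy", and the shared three-dimensional task space are illustrative placeholders.

```python
# Minimal sketch (assumptions, not the paper's code): each leg is an agent
# producing an RMP, i.e. a desired acceleration and a metric weight; the
# contributions are combined with the standard RMP resolve operation.
import numpy as np

class LegAgent:
    """One leg = one agent. In the paper the policy is learned with
    multi-agent RL; here a random linear map stands in for it."""
    def __init__(self, dim=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(dim, dim))  # placeholder policy weights

    def act(self, obs):
        accel = self.W @ obs          # desired task-space acceleration
        metric = np.eye(len(obs))     # Riemannian metric (importance weight)
        return accel, metric

def resolve_rmps(rmps):
    """RMP combination rule: a* = (sum_i M_i)^+ (sum_i M_i a_i).
    All agents are assumed to act in the same task space, so no pullback
    maps are shown here."""
    M_sum = sum(M for _, M in rmps)
    f_sum = sum(M @ a for a, M in rmps)
    return np.linalg.pinv(M_sum) @ f_sum

# Toy usage: four decentralized leg agents acting on the same observation.
legs = [LegAgent(seed=i) for i in range(4)]
obs = np.array([0.1, -0.2, 0.05])
combined_accel = resolve_rmps([leg.act(obs) for leg in legs])
print(combined_accel)
```

The point of the sketch is only the structure: each agent contributes an (acceleration, metric) pair, and the metric-weighted resolve step yields a single command, which is what allows the RMP machinery to be applied leg by leg rather than to the underactuated robot as a whole.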

Share and Cite

MDPI and ACS Style

Wang, Y.; Sagawa, R.; Yoshiyasu, Y. Learning Advanced Locomotion for Quadrupedal Robots: A Distributed Multi-Agent Reinforcement Learning Framework with Riemannian Motion Policies. Robotics 2024, 13, 86. https://doi.org/10.3390/robotics13060086

AMA Style

Wang Y, Sagawa R, Yoshiyasu Y. Learning Advanced Locomotion for Quadrupedal Robots: A Distributed Multi-Agent Reinforcement Learning Framework with Riemannian Motion Policies. Robotics. 2024; 13(6):86. https://doi.org/10.3390/robotics13060086

Chicago/Turabian Style

Wang, Yuliu, Ryusuke Sagawa, and Yusuke Yoshiyasu. 2024. "Learning Advanced Locomotion for Quadrupedal Robots: A Distributed Multi-Agent Reinforcement Learning Framework with Riemannian Motion Policies" Robotics 13, no. 6: 86. https://doi.org/10.3390/robotics13060086

APA Style

Wang, Y., Sagawa, R., & Yoshiyasu, Y. (2024). Learning Advanced Locomotion for Quadrupedal Robots: A Distributed Multi-Agent Reinforcement Learning Framework with Riemannian Motion Policies. Robotics, 13(6), 86. https://doi.org/10.3390/robotics13060086

