Robotics, Volume 2, Issue 3 (September 2013) – 4 articles, Pages 122–186

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF, click its "PDF Full-text" link and open it with the free Adobe Reader.
PDF Full-text (59 KiB)
Editorial
Special Issue on Intelligent Robots
by Genci Capi
Robotics 2013, 2(3), 185-186; https://doi.org/10.3390/robotics2030185 - 06 Aug 2013
Viewed by 5796
Abstract
Research on intelligent robots will produce robots that are able to operate in everyday environments, to adapt their programs to environmental changes, and to cooperate with other team members and with humans. Operating in human environments, robots need to process, in real time, a large amount of sensory data (vision, laser, microphone) in order to determine the best action. Intelligent algorithms have been successfully applied to link complex sensory data to robot actions. This editorial briefly summarizes recent findings in the field of intelligent robots as described in the articles published in this special issue.
(This article belongs to the Special Issue Intelligent Robots)
PDF Full-text (4806 KiB)
Article
Hormone-Inspired Behaviour Switching for the Control of Collective Robotic Organisms
by Tüze Kuyucu, Ivan Tanev and Katsunori Shimohara
Robotics 2013, 2(3), 165-184; https://doi.org/10.3390/robotics2030165 - 24 Jul 2013
Cited by 4 | Viewed by 7825
Abstract
Swarming and modular robotic locomotion are two distinct behaviours that a group of small, homogeneous robots can achieve. Both behaviours are popular subjects in robotics research on search, rescue and exploration, yet they are rarely addressed as behaviours that can coexist within a single robotic system. Here, we present a bio-inspired decision mechanism, which provides a convenient way for evolution to configure the conditions and timing of behaving as a swarm or as a modular robot in an exploration scenario. The decision mechanism switches between two previously developed behaviours: pheromone-based swarm control and sinusoidal rectilinear modular-robot movement. We use Genetic Programming (GP) to evolve the controller for these decisions, which acts without a centralized mechanism and with limited inter-robot communication. The results show that the proposed bio-inspired decision mechanism provides an evolvable medium from which GP can produce an effective decision-making mechanism.
(This article belongs to the Special Issue Intelligent Robots)
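The abstract describes the switching idea only at a high level. Purely as an illustration of how a hormone-like signal can gate between two behaviours, here is a minimal Python sketch: the class name, decay and gain constants, thresholds and stimulus are all invented for this example, and the paper's actual controller is evolved with Genetic Programming rather than hand-written.

```python
# Hypothetical sketch of a hormone-like switching signal. All constants and
# names here are illustrative assumptions, not the authors' evolved design.
import random

class HormoneSwitch:
    """Virtual 'hormone' that accumulates with a stimulus and decays over time.

    The robot behaves as a swarm member while the level is low and joins a
    modular organism once the level crosses a threshold, with hysteresis so
    it does not oscillate at the boundary.
    """

    def __init__(self, decay=0.95, gain=0.2, high=0.8, low=0.3):
        self.level = 0.0
        self.decay = decay      # per-step exponential decay of the hormone
        self.gain = gain        # how strongly a stimulus raises the level
        self.high = high        # switch swarm -> modular above this level
        self.low = low          # switch modular -> swarm below this level
        self.mode = "swarm"

    def step(self, stimulus):
        """stimulus in [0, 1], e.g. local neighbour density or pheromone."""
        self.level = self.decay * self.level + self.gain * stimulus
        if self.mode == "swarm" and self.level > self.high:
            self.mode = "modular"
        elif self.mode == "modular" and self.level < self.low:
            self.mode = "swarm"
        return self.mode

# Toy run: random local stimuli drive the mode switches.
ctrl = HormoneSwitch()
for t in range(50):
    mode = ctrl.step(random.random())
print("final mode:", mode, "level:", round(ctrl.level, 3))
```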
PDF Full-text (431 KiB)
Article
An Improved Reinforcement Learning System Using Affective Factors
by Takashi Kuremoto, Tetsuya Tsurusaki, Kunikazu Kobayashi, Shingo Mabu and Masanao Obayashi
Robotics 2013, 2(3), 149-164; https://doi.org/10.3390/robotics2030149 - 10 Jul 2013
Cited by 8 | Viewed by 9396
Abstract
As a powerful and intelligent machine learning method, reinforcement learning (RL) has been widely used in fields such as game theory, adaptive control, multi-agent systems and nonlinear forecasting. The main contribution of this technique is its exploration and exploitation approach to finding optimal or near-optimal solutions to goal-directed problems. However, when RL is applied to multi-agent systems (MASs), problems such as the “curse of dimensionality”, the “perceptual aliasing problem” and the uncertainty of the environment constitute high hurdles for RL. Meanwhile, although RL is inspired by behavioral psychology and uses reward/punishment from the environment, higher mental factors such as affects, emotions and motivations are rarely adopted in the learning procedure of RL. In this paper, to address the challenges of agent learning in MASs, we propose a computational motivation function, which adopts two principal affective factors, “Arousal” and “Pleasure”, from Russell’s circumplex model of affect, to improve the learning performance of a conventional RL algorithm named Q-learning (QL). Computer simulations of pursuit problems with static and dynamic prey were carried out, and the results showed that, compared with conventional QL, the proposed method gives agents faster and more stable learning performance.
(This article belongs to the Special Issue Intelligent Robots)
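To make the idea of affect-modulated learning concrete, the following is a hedged Python sketch of tabular Q-learning in which a simple motivation value, built from "Arousal" and "Pleasure" terms, shrinks the exploration rate. The update rules for the two affective factors and the way motivation enters action selection are assumptions made for illustration; the paper defines its own computational motivation function.

```python
# Sketch of Q-learning with an affect-derived motivation value. The affect
# update rules below are illustrative assumptions, not the paper's equations.
import random
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9                  # Q-learning step size and discount
Q = defaultdict(float)                   # Q[(state, action)]
ACTIONS = ["up", "down", "left", "right"]

pleasure, arousal = 0.0, 0.5             # affective state: [-1, 1] and [0, 1]

def update_affect(reward):
    """Pleasure tracks recent reward sign; arousal tracks reward magnitude."""
    global pleasure, arousal
    pleasure = 0.9 * pleasure + 0.1 * (1.0 if reward > 0 else -1.0)
    arousal = 0.9 * arousal + 0.1 * min(abs(reward), 1.0)

def choose_action(state):
    """Epsilon-greedy where motivation shrinks epsilon: a content (high
    pleasure), calm (low arousal) agent explores less and exploits more."""
    motivation = 0.5 * (pleasure + 1.0) * (1.0 - arousal)   # in [0, 1]
    epsilon = 0.5 * (1.0 - motivation)
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state):
    """Standard one-step Q-learning backup, plus the affect update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    update_affect(reward)

# Toy usage on a 1-D chain: reward waits at state 5, episodes reset to 0.
state = 0
for step in range(1000):
    action = choose_action(state)
    next_state = max(0, min(5, state + (1 if action in ("up", "right") else -1)))
    reward = 1.0 if next_state == 5 else 0.0
    learn(state, action, reward, next_state)
    state = 0 if next_state == 5 else next_state
```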
PDF Full-text (1941 KiB)
Article
Reinforcement Learning in Robotics: Applications and Real-World Challenges
by Petar Kormushev, Sylvain Calinon and Darwin G. Caldwell
Robotics 2013, 2(3), 122-148; https://doi.org/10.3390/robotics2030122 - 05 Jul 2013
Cited by 170 | Viewed by 37235
Abstract
In robotics, the ultimate goal of reinforcement learning is to endow robots with the ability to learn, improve, adapt and reproduce tasks with dynamically changing constraints, based on exploration and autonomous learning. We give a summary of the state of the art of reinforcement learning in the context of robotics, in terms of both algorithms and policy representations. Numerous challenges faced by the policy representation in robotics are identified. Three recent examples of the application of reinforcement learning to real-world robots are described: a pancake flipping task, a bipedal walking energy minimization task and an archery-based aiming task. In all examples, a state-of-the-art expectation-maximization-based reinforcement learning algorithm is used, and different policy representations are proposed and evaluated for each task. The proposed policy representations offer viable solutions to six rarely addressed challenges in policy representations: correlations, adaptability, multi-resolution, globality, multi-dimensionality and convergence. Both the successes and the practical difficulties encountered in these examples are discussed. Based on insights from these particular cases, conclusions are drawn about the state of the art and promising future directions for reinforcement learning in robotics.
(This article belongs to the Special Issue Intelligent Robots)
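As background for readers unfamiliar with expectation-maximization-based policy search, here is a toy Python sketch in the spirit of reward-weighted methods such as PoWER: perturb the policy parameters, roll out, then average the perturbations weighted by return. The task, reward function and constants are placeholders, not the pancake-flipping, walking or archery setups studied in the paper.

```python
# Minimal EM-style policy search sketch: reward-weighted averaging of
# parameter perturbations. Toy task and constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(theta):
    """Toy episodic return: peaks when theta matches a hidden target."""
    target = np.array([1.0, -0.5, 0.25])
    return np.exp(-np.sum((theta - target) ** 2))

theta = np.zeros(3)        # policy parameters (e.g., motion-primitive weights)
sigma = 0.3                # exploration noise scale
n_rollouts = 20

for iteration in range(100):
    eps = sigma * rng.standard_normal((n_rollouts, theta.size))
    returns = np.array([rollout_return(theta + e) for e in eps])
    # EM-style update: average the exploration noise weighted by return,
    # so higher-return perturbations pull the parameter mean harder.
    weights = returns / (returns.sum() + 1e-12)
    theta = theta + weights @ eps

print("learned parameters:", np.round(theta, 3))
```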