Recent Advances in Multi-Agent System

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Systems & Control Engineering".

Deadline for manuscript submissions: closed (15 July 2023) | Viewed by 2728

Special Issue Editors


Guest Editor
School of Automation, Hangzhou Dianzi University, Hangzhou 310005, China
Interests: multirobot systems; swarm intelligence; intelligent control; reinforcement learning

Guest Editor
School of Automation, Hangzhou Dianzi University, Hangzhou 310005, China
Interests: multiagent systems; unmanned systems; networked control; distributed coordination control

Guest Editor
School of Automation, Hangzhou Dianzi University, Hangzhou 310005, China
Interests: multiagent systems; networked control; distributed coordination control; event-triggering control

Guest Editor
School of Automation, Hangzhou Dianzi University, Hangzhou 310005, China
Interests: intelligent robot; intelligent control; image processing; swarm intelligence

Special Issue Information

Dear Colleagues,

With the rapid development of perception, communication, and computation technologies, the distributed cooperative control of multiagent systems has received much attention from researchers and engineers over the last decade, owing to its wide applications in large-scale process industries, multirobot systems, intelligent transportation systems, sensor networks, smart grids, and Internet systems. Compared with single-agent systems, multiagent systems offer advantages such as wide sensing coverage, rich and diverse detection information, and mobility. Accordingly, various cooperative control approaches have been proposed for multiagent systems, including fixed-time control, finite-time control, reinforcement-learning-based control, coverage control, and event-triggering control, to name a few. In this Special Issue, we are particularly interested in new cooperative control approaches for multiagent systems. We also welcome papers on real-world applications such as signal source localization, environmental monitoring, magnetic map construction, and surveillance and search. We look forward to receiving your contributions.
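As a minimal, self-contained illustration of the kind of distributed coordination control discussed above, the Python sketch below implements a textbook discrete-time average-consensus update on a fixed communication graph. It is a generic example under assumed parameters (a ring topology and step size eps), not an algorithm from any contribution to this issue.

    import numpy as np

    def consensus_step(x, neighbors, eps=0.2):
        # One synchronous update: each agent moves toward its neighbors' states.
        # Convergence requires 0 < eps < 1/(max node degree).
        x_new = x.copy()
        for i, nbrs in neighbors.items():
            x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
        return x_new

    # Four agents on a ring graph; states converge to the initial average (4.0).
    x = np.array([1.0, 3.0, 5.0, 7.0])
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    for _ in range(100):
        x = consensus_step(x, ring)
    print(x)  # approximately [4. 4. 4. 4.]

Event-triggering and finite-time variants of cooperative control replace this fixed-rate update with state-dependent triggering conditions or nonsmooth update laws, respectively.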

Prof. Dr. Qiang Lv
Dr. Bo Wang
Dr. Na Huang
Dr. Botao Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • finite-time cooperative control
  • fixed-time cooperative control
  • event-triggering control
  • coverage control
  • formation control
  • consensus control with constraints
  • reinforcement learning for multiagent systems
  • real-world applications

Published Papers (2 papers)

19 pages, 8411 KiB  
Article
A Distributed Conflict-Free Task Allocation Method for Multi-AGV Systems
by Qiang Guo, Haiyan Yao, Yi Liu, Zhipeng Tang, Xufeng Zhang and Ning Li
Electronics 2023, 12(18), 3877; https://doi.org/10.3390/electronics12183877 - 14 Sep 2023
Viewed by 871
Abstract
In the era of Industry 4.0, multi-Automated Guided Vehicle (AGV) systems, as the main force of intelligent logistics systems, have developed rapidly. Multi-AGV systems are currently a research hotspot, and task allocation, as one of their key technologies, is receiving much attention. In this study, a new task allocation scheme for multi-AGV systems is proposed based on a distributed framework. The AGVs autonomously select tasks, plan paths, and communicate with their neighbors to ensure that all tasks are completed at low cost and conflicts are avoided. While maintaining total connectivity, the proposed method avoids the surge in computational load at a central task server when the number of AGVs increases sharply, and it offers good flexibility and real-time performance. In addition, examples are provided to demonstrate the effectiveness of the connectivity maintainer and the task allocation method.
(This article belongs to the Special Issue Recent Advances in Multi-Agent System)
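The abstract above does not spell out the authors' algorithm; as a rough, hypothetical Python sketch of the general idea of conflict-free assignment (with an assumed distance-based cost function and toy positions), the following greedy auction gives each task to at most one AGV by construction.

    def greedy_auction(agents, tasks, cost):
        # Each round, every free agent bids on its cheapest remaining task;
        # the lowest bid wins. A task is removed as soon as it is claimed,
        # so the resulting assignment is conflict-free by construction.
        assignment = {}
        unassigned, remaining = set(agents), set(tasks)
        while unassigned and remaining:
            bids = []
            for a in unassigned:
                t = min(remaining, key=lambda t: cost(a, t))
                bids.append((cost(a, t), a, t))
            _, a, t = min(bids)
            assignment[a] = t
            unassigned.discard(a)
            remaining.discard(t)
        return assignment

    # Toy usage: three AGVs and three stations on a line; cost = distance.
    agv_pos = {0: 0.0, 1: 5.0, 2: 9.0}
    station_pos = {"A": 1.0, "B": 4.0, "C": 8.0}
    print(greedy_auction(agv_pos, station_pos,
                         lambda a, t: abs(agv_pos[a] - station_pos[t])))
    # {0: 'A', 1: 'B', 2: 'C'}

In a truly distributed implementation, each round's winner would be determined by neighbor-to-neighbor message passing (for example, max-consensus over bids) rather than the global min used here for brevity.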

19 pages, 1398 KiB  
Article
GR(1)-Guided Deep Reinforcement Learning for Multi-Task Motion Planning under a Stochastic Environment
by Chenyang Zhu, Yujie Cai, Jinyu Zhu, Can Hu and Jia Bi
Electronics 2022, 11(22), 3716; https://doi.org/10.3390/electronics11223716 - 13 Nov 2022
Cited by 5 | Viewed by 1401
Abstract
Motion planning has been used in robotics research to make movement decisions under certain movement constraints. Deep Reinforcement Learning (DRL) approaches have been applied to cases of motion planning with continuous state representations. However, current DRL approaches suffer from reward sparsity and overestimation issues, and it is challenging to train agents to handle complex task specifications under deep neural network approximations. This paper considers a fragment of Linear Temporal Logic (LTL), Generalized Reactivity of rank 1 (GR(1)), as a high-level reactive temporal logic to guide robots in learning efficient movement strategies in a stochastic environment. We first use the synthesized strategy of GR(1) to construct a potential-based reward machine, to which we save the experiences per state. We integrate GR(1) with DQN, double DQN, and dueling double DQN. Observing that the synthesized strategies of GR(1) can take the form of directed cyclic graphs, we develop a topological-sort-based reward-shaping approach to calculate the potential values of the reward machine, and we then use the dueling architecture on the double deep Q-network with the saved experiences to train the agents. Experiments on multi-task learning show that the proposed approach outperforms state-of-the-art algorithms in learning rate and optimal rewards. In addition, compared with value-iteration-based reward-shaping approaches, our topological-sort-based approach achieves a higher accumulated reward in the cases where the synthesized strategies are directed cyclic graphs.
(This article belongs to the Special Issue Recent Advances in Multi-Agent System)
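As a hedged sketch of how potentials might be assigned by topological order (the paper's exact potential function and its handling of cyclic strategy graphs are not reproduced here), the Python fragment below ranks the states of a small acyclic reward machine by longest-path distance to the accepting state and uses the result in the standard potential-based shaping term.

    from graphlib import TopologicalSorter

    def potentials_from_dag(succ, accept):
        # succ maps each reward-machine state to its successors (acyclic here,
        # and every state is assumed to reach `accept`). Feeding the successor
        # map to TopologicalSorter (which expects predecessors) yields an order
        # in which successors come first, so each potential is ready when used.
        phi = {}
        for u in TopologicalSorter(succ).static_order():
            phi[u] = 0 if u == accept else max(phi[v] for v in succ[u]) - 1
        return phi

    # Toy reward machine: s0 -> {s1, s2} -> s3 (accepting).
    phi = potentials_from_dag({"s0": ["s1", "s2"], "s1": ["s3"], "s2": ["s3"]}, "s3")
    print(phi)  # e.g. {'s3': 0, 's1': -1, 's2': -1, 's0': -2}

    def shaped_reward(r, s, s_next, gamma=0.99):
        # Standard potential-based shaping; it preserves optimal policies.
        return r + gamma * phi[s_next] - phi[s]

For cyclic strategy graphs, one common workaround is to condense strongly connected components into single nodes so that a topological order exists; the paper's own treatment may differ.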
