Emergencies occur frequently, posing threats and challenges to people's lives and to public safety. Consequently, multi-agent evacuation has become a significant part of the emergency response process. However, most existing works focus only on the evacuation of a small number of agents and do not consider the multi-agent cooperation problems caused by an increasing number of agents or by the impact of emergencies. Therefore, this paper proposes an event-driven multi-agent evacuation framework that consists of three parts: event collection, event sending, and task execution. During task execution, agents are divided into groups; each group selects a leader, and the other agents in the group move with the leader. A reinforcement learning algorithm proposed in this paper, Space Multi-Agent Deep Deterministic Policy Gradient (SMADDPG), is then used for path planning. In addition, the state, action, and reward are designed based on a Markov game, and an environment with emergencies is presented as the agents' evacuation scenario. Experimental results show that the proposed method shortens path length and improves interoperability among agents when emergencies occur, which can provide a decision-making reference for emergency departments formulating evacuation plans.
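The leader-follower task execution described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the rule that the agent nearest the exit becomes the group leader, the fixed step speed, and all function names are assumptions for the sketch.

```python
import math

def select_leader(group, exit_pos):
    # Assumed leader rule: the agent closest to the exit leads the group.
    return min(group, key=lambda p: math.dist(p, exit_pos))

def step_group(group, exit_pos, speed=0.5):
    """One movement step of the leader-follower scheme:
    the leader heads for the exit, followers head for the leader."""
    leader = select_leader(group, exit_pos)
    new_positions = []
    for pos in group:
        target = exit_pos if pos == leader else leader
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        dist = math.hypot(dx, dy)
        if dist <= speed:
            # Close enough to reach the target this step.
            new_positions.append(target)
        else:
            # Move `speed` units along the direction to the target.
            new_positions.append((pos[0] + speed * dx / dist,
                                  pos[1] + speed * dy / dist))
    return new_positions

# Toy scenario: three agents in one group, one exit at (5, 0).
group = [(0.0, 0.0), (2.0, 1.0), (4.0, 0.0)]
exit_pos = (5.0, 0.0)
for _ in range(20):
    group = step_group(group, exit_pos)
```

In the paper, the leader's path would instead come from the SMADDPG policy rather than the straight-line motion used here; the sketch only shows how followers defer to the group leader during task execution.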