Decentralized Control of Quadrotor Swarms with End-to-End Deep Reinforcement Learning
Abstract
We demonstrate the possibility of learning drone swarm controllers that are zero-shot transferable to real quadrotors via large-scale multi-agent end-to-end reinforcement learning. We train policies parameterized by neural networks that are capable of controlling individual drones in a swarm in a fully decentralized manner. Our policies, trained in simulated environments with realistic quadrotor physics, demonstrate advanced flocking behaviors, perform aggressive maneuvers in tight formations while avoiding collisions with each other, break and re-establish formations to avoid collisions with moving obstacles, and efficiently coordinate in pursuit-evasion tasks. We analyze, in simulation, how different model architectures and parameters of the training regime influence the final performance of neural swarms. We demonstrate the successful deployment of the model learned in simulation to highly resource-constrained physical quadrotors performing station keeping and goal swapping behaviors. Video demonstrations and source code are available at the project website https://sites.google.com/view/swarm-rl.
Cite
Text
Batra et al. "Decentralized Control of Quadrotor Swarms with End-to-End Deep Reinforcement Learning." Conference on Robot Learning, 2021.
Markdown
[Batra et al. "Decentralized Control of Quadrotor Swarms with End-to-End Deep Reinforcement Learning." Conference on Robot Learning, 2021.](https://mlanthology.org/corl/2021/batra2021corl-decentralized/)
BibTeX
@inproceedings{batra2021corl-decentralized,
title = {{Decentralized Control of Quadrotor Swarms with End-to-End Deep Reinforcement Learning}},
author = {Batra, Sumeet and Huang, Zhehui and Petrenko, Aleksei and Kumar, Tushar and Molchanov, Artem and Sukhatme, Gaurav S.},
booktitle = {Conference on Robot Learning},
year = {2021},
pages = {576--586},
volume = {164},
url = {https://mlanthology.org/corl/2021/batra2021corl-decentralized/}
}