Mean Field Games Flock! the Reinforcement Learning Way
Abstract
We present a method enabling a large number of agents to learn how to flock. This problem has drawn a lot of interest, but existing approaches require many structural assumptions and are tractable only in low dimensions. We phrase this problem as a Mean Field Game (MFG), where each individual chooses its own acceleration depending on the population behavior. Combining Deep Reinforcement Learning (RL) and Normalizing Flows (NF), we obtain a tractable solution requiring only very weak assumptions. Our algorithm finds a Nash Equilibrium, and the agents adapt their velocity to match the average velocity of the neighboring flock. We use Fictitious Play and alternate: (1) computing an approximate best response with Deep RL, and (2) estimating the next population distribution with NF. We show numerically that our algorithm can learn multi-group or high-dimensional flocking with obstacles.
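To illustrate the Fictitious Play loop described above, here is a minimal toy sketch in Python. It is not the paper's method: the Deep RL best response is replaced by a hypothetical closed-form update pulling each agent's velocity toward the estimated population mean, and the NF density estimate is replaced by a running scalar average, which is the Fictitious Play averaging step. All function names and parameters are illustrative assumptions.

```python
import numpy as np

def best_response(mean_estimate, velocities, lr=0.5):
    # Stand-in for the Deep RL step: each agent accelerates toward the
    # current estimate of the population's mean velocity.
    return velocities + lr * (mean_estimate - velocities)

def fictitious_play(n_agents=100, n_iters=50, seed=0):
    rng = np.random.default_rng(seed)
    velocities = rng.normal(0.0, 1.0, n_agents)  # initial population
    mean_estimate = velocities.mean()            # stand-in for the NF density
    for k in range(1, n_iters + 1):
        # (1) approximate best response against the current estimate
        velocities = best_response(mean_estimate, velocities)
        # (2) Fictitious Play: average the new empirical mean into the estimate
        mean_estimate += (velocities.mean() - mean_estimate) / k
    return velocities, mean_estimate

velocities, mean_estimate = fictitious_play()
print(np.std(velocities))  # velocities concentrate around the shared mean
```

In this toy setting the agents' velocities collapse onto the population mean, mirroring the flocking behavior the paper reports; the real algorithm replaces both stand-ins with learned components and operates on full position-velocity states.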
Cite
Text
Perrin et al. "Mean Field Games Flock! the Reinforcement Learning Way." International Joint Conference on Artificial Intelligence, 2021. doi:10.24963/IJCAI.2021/50
Markdown
[Perrin et al. "Mean Field Games Flock! the Reinforcement Learning Way." International Joint Conference on Artificial Intelligence, 2021.](https://mlanthology.org/ijcai/2021/perrin2021ijcai-mean/) doi:10.24963/IJCAI.2021/50
BibTeX
@inproceedings{perrin2021ijcai-mean,
title = {{Mean Field Games Flock! the Reinforcement Learning Way}},
author = {Perrin, Sarah and Laurière, Mathieu and Pérolat, Julien and Geist, Matthieu and Élie, Romuald and Pietquin, Olivier},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2021},
pages = {356-362},
doi = {10.24963/IJCAI.2021/50},
url = {https://mlanthology.org/ijcai/2021/perrin2021ijcai-mean/}
}