Dynamic Noises of Multi-Agent Environments Can Improve Generalization: Agent-Based Models Meets Reinforcement Learning
Abstract
We study the benefits of reinforcement learning (RL) environments based on agent-based models (ABM). While ABMs are known to offer microfoundational simulations at the cost of computational complexity, we empirically show in this work that their non-deterministic dynamics can improve the generalization of RL agents. To this end, we examine the control of epidemic SIR environments based on either differential equations or ABMs. Numerical simulations demonstrate that the intrinsic noise in the ABM-based dynamics of the SIR model not only improves the average reward but also allows the RL agent to generalize over a wider range of epidemic parameters.
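The contrast between the two environment types can be sketched with the standard SIR dynamics: a deterministic Euler step of the differential equations versus a stochastic per-agent update whose random infection and recovery draws supply the intrinsic noise the abstract refers to. This is a minimal illustrative sketch, not code from the paper; the parameter values (`beta`, `gamma`, `dt`) and the Bernoulli-draw agent update are assumptions chosen for simplicity.

```python
import random

def sir_ode_step(s, i, r, beta, gamma, dt):
    """One deterministic Euler step of the SIR equations (population fractions)."""
    new_inf = beta * s * i * dt   # ds/dt = -beta*s*i
    new_rec = gamma * i * dt      # dr/dt = gamma*i
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def sir_abm_step(S, I, R, beta, gamma, dt, rng):
    """One stochastic agent-based step: each susceptible agent is infected with
    probability beta*I/N*dt, each infected agent recovers with probability gamma*dt."""
    N = S + I + R
    new_inf = sum(rng.random() < beta * I / N * dt for _ in range(S))
    new_rec = sum(rng.random() < gamma * dt for _ in range(I))
    return S - new_inf, I + new_inf - new_rec, R + new_rec

# Rolling out both dynamics from matched initial conditions: the ODE trajectory
# is identical on every run, while the ABM trajectory varies with the seed.
rng = random.Random(0)
s, i, r = 0.99, 0.01, 0.0     # ODE state (fractions)
S, I, R = 990, 10, 0          # ABM state (agent counts, N = 1000)
for _ in range(100):
    s, i, r = sir_ode_step(s, i, r, beta=0.3, gamma=0.1, dt=0.1)
    S, I, R = sir_abm_step(S, I, R, beta=0.3, gamma=0.1, dt=0.1, rng=rng)
```

An RL agent trained against the ABM step sees a slightly different trajectory on each episode, which acts like domain randomization over the epidemic dynamics.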
Cite
Text
Akrout et al. "Dynamic Noises of Multi-Agent Environments Can Improve Generalization: Agent-Based Models Meets Reinforcement Learning." ICLR 2022 Workshops: GMS, 2022.

Markdown
[Akrout et al. "Dynamic Noises of Multi-Agent Environments Can Improve Generalization: Agent-Based Models Meets Reinforcement Learning." ICLR 2022 Workshops: GMS, 2022.](https://mlanthology.org/iclrw/2022/akrout2022iclrw-dynamic/)

BibTeX
@inproceedings{akrout2022iclrw-dynamic,
title = {{Dynamic Noises of Multi-Agent Environments Can Improve Generalization: Agent-Based Models Meets Reinforcement Learning}},
author = {Akrout, Mohamed and Feriani, Amal and McLeod, Bob},
booktitle = {ICLR 2022 Workshops: GMS},
year = {2022},
url = {https://mlanthology.org/iclrw/2022/akrout2022iclrw-dynamic/}
}