Robustness to Multi-Modal Environment Uncertainty in MARL Using Curriculum Learning
Abstract
Multi-agent reinforcement learning (MARL) plays a pivotal role in tackling real-world challenges. However, the seamless transfer of trained policies from simulation to the real world requires them to be robust to various environmental uncertainties. Existing works focus on finding a Nash equilibrium or an optimal policy under uncertainty in a single environment variable (i.e., action, state, or reward), because a multi-agent system is highly complex and non-stationary. In a real-world setting, however, uncertainty can occur in multiple environment variables simultaneously. This work is the first to formulate the generalised problem of robustness to multi-modal environment uncertainty in MARL. To this end, we propose a general robust-training approach for multi-modal uncertainty based on curriculum learning techniques. We handle environmental uncertainty in more than one variable simultaneously and present extensive results across both cooperative and competitive MARL environments, demonstrating that our approach achieves state-of-the-art robustness on three multi-particle environment tasks (Cooperative-Navigation, Keep-Away, Physical Deception).
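The abstract does not spell out the training procedure, but the core idea, a curriculum that raises several uncertainty modes (state, action, reward) together across training stages, can be illustrated with a minimal sketch. All function names, noise models, and magnitudes below are illustrative assumptions, not the paper's actual method:

```python
import random

def make_uncertainty_schedule(num_stages, max_levels):
    """Linear curriculum: stage k sets every uncertainty mode to
    (k / num_stages) of its maximum, so all modes grow together."""
    schedule = []
    for stage in range(1, num_stages + 1):
        frac = stage / num_stages
        schedule.append({mode: frac * max_level
                         for mode, max_level in max_levels.items()})
    return schedule

def perturb(value, level, rng):
    """Additive uniform noise scaled by the current uncertainty level."""
    return value + rng.uniform(-level, level)

def train_with_curriculum(schedule, episodes_per_stage, rng):
    """Toy loop: each episode sees simultaneous noise in state,
    action, and reward at the current stage's levels."""
    rewards = []
    for levels in schedule:
        for _ in range(episodes_per_stage):
            state = perturb(0.0, levels["state"], rng)
            action = perturb(state, levels["action"], rng)
            reward = perturb(-abs(action), levels["reward"], rng)
            rewards.append(reward)
    return rewards

rng = random.Random(0)
schedule = make_uncertainty_schedule(
    num_stages=3,
    max_levels={"state": 0.3, "action": 0.2, "reward": 0.1})
history = train_with_curriculum(schedule, episodes_per_stage=10, rng=rng)
```

The schedule starts training under mild noise and only reaches the full multi-modal uncertainty in the final stage; the fraction could equally follow any monotone ramp rather than the linear one assumed here.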
Cite
Text
Agrawal et al. "Robustness to Multi-Modal Environment Uncertainty in MARL Using Curriculum Learning." NeurIPS 2023 Workshops: MASEC, 2023.
Markdown
[Agrawal et al. "Robustness to Multi-Modal Environment Uncertainty in MARL Using Curriculum Learning." NeurIPS 2023 Workshops: MASEC, 2023.](https://mlanthology.org/neuripsw/2023/agrawal2023neuripsw-robustness/)
BibTeX
@inproceedings{agrawal2023neuripsw-robustness,
  title = {{Robustness to Multi-Modal Environment Uncertainty in MARL Using Curriculum Learning}},
  author = {Agrawal, Aakriti and Aralikatti, Rohith and Sun, Yanchao and Huang, Furong},
  booktitle = {NeurIPS 2023 Workshops: MASEC},
  year = {2023},
  url = {https://mlanthology.org/neuripsw/2023/agrawal2023neuripsw-robustness/}
}