Rethinking Learning Dynamics in RL Using Adversarial Networks
Abstract
Recent years have seen tremendous progress in reinforcement learning methods. However, most of these approaches are trained in a straightforward fashion and are generally not robust to adversity, especially in the meta-RL setting. To the best of our knowledge, our work is the first to propose an adversarial training regime for Multi-Task Reinforcement Learning that requires no manual intervention or domain knowledge of the environments. Our experiments on multiple environments in the Multi-Task Reinforcement Learning domain demonstrate that the adversarial process leads to better exploration of numerous solutions and a deeper understanding of the environment. We also adapt existing measures of causal attribution to draw insights from the learned skills, facilitating easier re-purposing of skills for adaptation to unseen environments and tasks.
Cite
Text
Kumar et al. "Rethinking Learning Dynamics in RL Using Adversarial Networks." NeurIPS 2022 Workshops: DeepRL, 2022.
Markdown
[Kumar et al. "Rethinking Learning Dynamics in RL Using Adversarial Networks." NeurIPS 2022 Workshops: DeepRL, 2022.](https://mlanthology.org/neuripsw/2022/kumar2022neuripsw-rethinking/)
BibTeX
@inproceedings{kumar2022neuripsw-rethinking,
  title = {{Rethinking Learning Dynamics in RL Using Adversarial Networks}},
  author = {Kumar, Ramnath and Deleu, Tristan and Bengio, Yoshua},
  booktitle = {NeurIPS 2022 Workshops: DeepRL},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/kumar2022neuripsw-rethinking/}
}