Goal Misgeneralization in Deep Reinforcement Learning

Abstract

We study goal misgeneralization, a type of out-of-distribution robustness failure in reinforcement learning (RL). Goal misgeneralization occurs when an RL agent retains its capabilities out-of-distribution yet pursues the wrong goal. For instance, an agent might continue to competently avoid obstacles, but navigate to the wrong place. In contrast, prior work has typically focused on capability generalization failures, where an agent fails to do anything sensible at test time. We provide the first explicit empirical demonstrations of goal misgeneralization and present a partial characterization of its causes.
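The sketch below is a minimal toy illustration of the phenomenon the abstract describes, not code from the paper: a hypothetical gridworld in which the goal object always appears in the top-right corner during training, so a policy that internalized the proxy objective "go to the top-right corner" looks aligned in distribution but pursues the wrong goal once the object moves. All names (proxy_policy, rollout, GRID) are invented for illustration.

```python
# Illustrative sketch only (not the paper's codebase): a policy that
# learned a proxy goal keeps its navigation capability out of
# distribution but heads to the wrong place.

GRID = 5  # 5x5 gridworld; agent starts at (0, 0)

def proxy_policy(agent, _goal):
    """Policy that internalized the proxy goal: move toward the
    top-right corner, ignoring where the goal object actually is."""
    x, y = agent
    if x < GRID - 1:
        return (x + 1, y)  # step right
    if y < GRID - 1:
        return (x, y + 1)  # step up
    return (x, y)          # stay at the corner

def rollout(policy, goal, steps=20):
    """Run the policy for a fixed number of steps from the start state."""
    agent = (0, 0)
    for _ in range(steps):
        agent = policy(agent, goal)
    return agent

# In distribution: the goal sits in the top-right corner, as in every
# training episode, so the proxy policy appears to pursue the true goal.
train_goal = (GRID - 1, GRID - 1)
assert rollout(proxy_policy, train_goal) == train_goal  # looks aligned

# Out of distribution: the goal has moved. The agent still navigates
# competently (a capability that generalized), but to the wrong place
# (a goal that did not).
test_goal = (0, GRID - 1)
print(rollout(proxy_policy, test_goal), "vs goal at", test_goal)
# -> (4, 4) vs goal at (0, 4)
```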

Cite

Text

Di Langosco et al. "Goal Misgeneralization in Deep Reinforcement Learning." International Conference on Machine Learning, 2022.

Markdown

[Di Langosco et al. "Goal Misgeneralization in Deep Reinforcement Learning." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/langosco2022icml-goal/)

BibTeX

@inproceedings{langosco2022icml-goal,
  title     = {{Goal Misgeneralization in Deep Reinforcement Learning}},
  author    = {Di Langosco, Lauro Langosco and Koch, Jack and Sharkey, Lee D and Pfau, Jacob and Krueger, David},
  booktitle = {International Conference on Machine Learning},
  year      = {2022},
  pages     = {12004--12019},
  volume    = {162},
  url       = {https://mlanthology.org/icml/2022/langosco2022icml-goal/}
}