BC-IRL: Learning Generalizable Reward Functions from Demonstrations

Abstract

How well do reward functions learned with inverse reinforcement learning (IRL) generalize? We illustrate that state-of-the-art IRL algorithms, which maximize a maximum-entropy objective, learn rewards that overfit to the demonstrations. Such rewards struggle to provide a meaningful signal for states not covered by the demonstrations, a major detriment when the reward is used to learn policies in new situations. We introduce BC-IRL, a new inverse reinforcement learning method that learns reward functions that generalize better than those of maximum-entropy IRL approaches. In contrast to the MaxEnt framework, which learns rewards that are maximized around the demonstrations, BC-IRL updates the reward parameters such that the policy trained with the new reward matches the expert demonstrations more closely. We show that BC-IRL learns rewards that generalize better on an illustrative simple task and two continuous robotic control tasks, achieving over twice the success rate of baselines in challenging generalization settings.
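The core idea can be viewed as a bi-level optimization: an inner, differentiable policy update driven by the learned reward, and an outer behavior-cloning loss on the updated policy that is backpropagated into the reward parameters. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the GaussianPolicy class, the bc_irl_reward_update function, and the single gradient-step inner update are illustrative assumptions (the paper differentiates through a full RL policy update).

# Hypothetical sketch of a BC-IRL-style reward update (not the authors' code).
# Assumes reward_net maps concat(state, action) -> scalar reward.
import torch
from torch import nn
from torch.func import functional_call


class GaussianPolicy(nn.Module):
    """Simple diagonal-Gaussian policy; forward returns log-probabilities of actions."""

    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.mean = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                  nn.Linear(64, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, states, actions):
        dist = torch.distributions.Normal(self.mean(states), self.log_std.exp())
        return dist.log_prob(actions).sum(-1)


def bc_irl_reward_update(reward_net, policy, reward_opt,
                         states, actions, expert_states, expert_actions,
                         policy_lr=1e-2):
    """One outer (reward) update, sketched with a single inner gradient step.

    Inner step: a differentiable policy-gradient-style step using the learned reward.
    Outer step: behavior-cloning loss of the *updated* policy on expert data,
    backpropagated through the inner step into the reward parameters.
    """
    # Inner surrogate: weight sampled log-probs by the learned reward.
    # Keeping the reward in the graph lets the outer gradient reach reward_net.
    log_probs = policy(states, actions)
    rewards = reward_net(torch.cat([states, actions], dim=-1)).squeeze(-1)
    inner_loss = -(log_probs * rewards).mean()

    params = dict(policy.named_parameters())
    grads = torch.autograd.grad(inner_loss, list(params.values()), create_graph=True)
    updated = {name: p - policy_lr * g
               for (name, p), g in zip(params.items(), grads)}

    # Outer objective: how well does the updated policy imitate the expert?
    bc_loss = -functional_call(policy, updated, (expert_states, expert_actions)).mean()

    reward_opt.zero_grad()
    bc_loss.backward()  # gradient flows into reward_net through `updated`
    reward_opt.step()
    return bc_loss.item()

In contrast to MaxEnt IRL, the reward here is never asked to score demonstrations above samples directly; it is updated only insofar as it helps the trained policy reproduce the expert, which is what the paper argues yields better generalization.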

Cite

Text

Szot et al. "BC-IRL: Learning Generalizable Reward Functions from Demonstrations." International Conference on Learning Representations, 2023.

Markdown

[Szot et al. "BC-IRL: Learning Generalizable Reward Functions from Demonstrations." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/szot2023iclr-bcirl/)

BibTeX

@inproceedings{szot2023iclr-bcirl,
  title     = {{BC-IRL: Learning Generalizable Reward Functions from Demonstrations}},
  author    = {Szot, Andrew and Zhang, Amy and Batra, Dhruv and Kira, Zsolt and Meier, Franziska},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/szot2023iclr-bcirl/}
}