Regularized Inverse Reinforcement Learning

Abstract

Inverse Reinforcement Learning (IRL) aims to facilitate a learner’s ability to imitate expert behavior by acquiring reward functions that explain the expert’s decisions. Regularized IRL applies strongly convex regularizers to the learner’s policy in order to avoid the expert’s behavior being rationalized by arbitrary constant rewards, also known as degenerate solutions. Current methods are restricted to the maximum-entropy IRL framework, limiting them to Shannon-entropy regularizers, and the solutions they propose are intractable in practice. We propose tractable solutions, and practical methods to obtain them, for regularized IRL. We present theoretical backing for our proposed IRL method’s applicability to both discrete and continuous controls, and empirically validate its performance on a variety of tasks.
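
As an informal sketch (the notation below is illustrative and not taken from this page): regularized RL augments the expected discounted return with a strongly convex policy regularizer $\Omega$, and regularized IRL seeks a reward whose regularized-optimal policy matches the expert policy $\pi_E$:

$$
\pi^{*}_{r} \;\in\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\Big[\textstyle\sum_{t}\gamma^{t}\big(r(s_t, a_t) - \Omega\big(\pi(\cdot \mid s_t)\big)\big)\Big],
\qquad
\text{IRL: find } r \text{ such that } \pi^{*}_{r} = \pi_{E}.
$$

Because $\Omega$ is strongly convex, a constant reward admits a unique regularized-optimal policy (the per-state minimizer of $\Omega$), which in general differs from the expert’s; this is why such degenerate rewards no longer rationalize expert behavior. Choosing $\Omega$ as the negative Shannon entropy recovers the maximum-entropy IRL setting as a special case.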

Cite

Text

Jeon et al. "Regularized Inverse Reinforcement Learning." International Conference on Learning Representations, 2021.

Markdown

[Jeon et al. "Regularized Inverse Reinforcement Learning." International Conference on Learning Representations, 2021.](https://mlanthology.org/iclr/2021/jeon2021iclr-regularized/)

BibTeX

@inproceedings{jeon2021iclr-regularized,
  title     = {{Regularized Inverse Reinforcement Learning}},
  author    = {Jeon, Wonseok and Su, Chen-Yang and Barde, Paul and Doan, Thang and Nowrouzezahrai, Derek and Pineau, Joelle},
  booktitle = {International Conference on Learning Representations},
  year      = {2021},
  url       = {https://mlanthology.org/iclr/2021/jeon2021iclr-regularized/}
}