Relative Entropy Inverse Reinforcement Learning

Abstract

We consider the problem of imitation learning where the examples, demonstrated by an expert, cover only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an efficient tool for generalizing the demonstration, based on the assumption that the expert is optimally acting in a Markov Decision Process (MDP). Most of the past work on IRL requires that a (near)-optimal policy can be computed for different reward functions. However, this requirement can hardly be satisfied in systems with a large, or continuous, state space. In this paper, we propose a model-free IRL algorithm, where the relative entropy between the empirical distribution of the state-action trajectories under a baseline policy and their distribution under the learned policy is minimized by stochastic gradient descent. We compare this new approach to well-known IRL algorithms using learned MDP models. Empirical results on simulated car racing, gridworld and ball-in-a-cup problems show that our approach is able to learn good policies from a small number of demonstrations.
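For readers who want a concrete picture of the "minimized by stochastic gradient descent" step described above, here is a minimal, hypothetical sketch, not the authors' implementation. It assumes a reward that is linear in trajectory features and a learned trajectory distribution of the exponential form P(τ) ∝ Q(τ) exp(θ·f(τ)) over the baseline distribution Q, so that expectations can be estimated by importance-weighting trajectories sampled from the baseline policy. The function and variable names (`reirl_gradient_step`, `expert_feat_mean`, `baseline_feats`) are introduced here purely for illustration.

```python
import numpy as np

def reirl_gradient_step(theta, expert_feat_mean, baseline_feats, lr=0.1):
    """One stochastic-gradient step on the reward weights theta (sketch only).

    theta            : (d,)   current reward weights, assuming r(tau) = theta . f(tau)
    expert_feat_mean : (d,)   mean feature vector of the expert demonstrations
    baseline_feats   : (n, d) feature vectors of trajectories sampled from the
                              baseline policy
    """
    # Importance weights: baseline trajectories reweighted by exp(theta . f(tau)),
    # the exponential form assumed in this sketch (not stated in the abstract).
    scores = baseline_feats @ theta
    w = np.exp(scores - scores.max())   # subtract max for numerical stability
    w /= w.sum()

    # Gradient direction: expert feature expectations minus the reweighted
    # baseline feature expectations, i.e. the direction that tightens
    # feature matching between demonstrations and the learned distribution.
    grad = expert_feat_mean - w @ baseline_feats
    return theta + lr * grad


# Hypothetical usage with random stand-in data:
rng = np.random.default_rng(0)
theta = np.zeros(4)
expert_feat_mean = rng.normal(size=4)
baseline_feats = rng.normal(size=(100, 4))
for _ in range(200):
    theta = reirl_gradient_step(theta, expert_feat_mean, baseline_feats)
```

In this toy loop the weights converge toward values under which the reweighted baseline trajectories reproduce the expert's feature expectations; in the model-free setting this is what allows the reward to be learned without computing an optimal policy for each candidate reward function.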

Cite

Text

Boularias et al. "Relative Entropy Inverse Reinforcement Learning." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.

Markdown

[Boularias et al. "Relative Entropy Inverse Reinforcement Learning." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.](https://mlanthology.org/aistats/2011/boularias2011aistats-relative/)

BibTeX

@inproceedings{boularias2011aistats-relative,
  title     = {{Relative Entropy Inverse Reinforcement Learning}},
  author    = {Boularias, Abdeslam and Kober, Jens and Peters, Jan},
  booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
  year      = {2011},
  pages     = {182--189},
  volume    = {15},
  url       = {https://mlanthology.org/aistats/2011/boularias2011aistats-relative/}
}