Active Learning for Reward Estimation in Inverse Reinforcement Learning

Abstract

Inverse reinforcement learning addresses the general problem of recovering a reward function from samples of a policy provided by an expert/demonstrator. In this paper, we introduce active learning for inverse reinforcement learning. We propose an algorithm that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at “arbitrary” states. The purpose of our algorithm is to estimate the reward function with accuracy similar to other methods from the literature while reducing the number of policy samples required from the expert. We also discuss the use of our algorithm in higher-dimensional problems, using both Monte Carlo and gradient methods. We present illustrative results of our algorithm in several simulated examples of different complexities.
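The core idea sketched in the abstract — querying the demonstrator at specific, informative states rather than at arbitrary ones — can be illustrated schematically. The snippet below is a minimal sketch, not the authors' algorithm: it assumes a tabular MDP, a hypothetical Monte Carlo posterior over reward functions (random samples here, standing in for a real posterior), and scores each state by how much the greedy policies induced by those reward samples disagree, querying the expert where disagreement is highest.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 6, 3

def greedy_policy(reward, transitions, gamma=0.9, iters=200):
    """Value iteration on a tabular MDP, then the greedy policy."""
    v = np.zeros(n_states)
    for _ in range(iters):
        q = reward[:, None] + gamma * transitions @ v  # shape (S, A)
        v = q.max(axis=1)
    return q.argmax(axis=1)

# Hypothetical posterior over rewards: a handful of Monte Carlo samples
# (a real implementation would draw these from, e.g., a Bayesian IRL posterior).
reward_samples = rng.normal(size=(20, n_states))

# Random transition model: transitions[s, a] is a next-state distribution.
transitions = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))

# Induced greedy policy for each sampled reward.
policies = np.array([greedy_policy(r, transitions) for r in reward_samples])

def entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Per-state disagreement: entropy of the empirical action distribution
# across the sampled policies.
scores = np.array([
    entropy(np.bincount(policies[:, s], minlength=n_actions))
    for s in range(n_states)
])

query_state = int(scores.argmax())  # ask the expert for its action here
```

The state with the highest score is the one where the reward posterior leaves the expert's behavior most uncertain, so a demonstration there is most informative.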

Cite

Text

Lopes et al. "Active Learning for Reward Estimation in Inverse Reinforcement Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2009. doi:10.1007/978-3-642-04174-7_3

Markdown

[Lopes et al. "Active Learning for Reward Estimation in Inverse Reinforcement Learning." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2009.](https://mlanthology.org/ecmlpkdd/2009/lopes2009ecmlpkdd-active/) doi:10.1007/978-3-642-04174-7_3

BibTeX

@inproceedings{lopes2009ecmlpkdd-active,
  title     = {{Active Learning for Reward Estimation in Inverse Reinforcement Learning}},
  author    = {Lopes, Manuel and Melo, Francisco S. and Montesano, Luis},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2009},
  pages     = {31--46},
  doi       = {10.1007/978-3-642-04174-7_3},
  url       = {https://mlanthology.org/ecmlpkdd/2009/lopes2009ecmlpkdd-active/}
}