Apprenticeship Learning About Multiple Intentions
Abstract
In this paper, we apply tools from inverse reinforcement learning (IRL) to the problem of learning from (unlabeled) demonstration trajectories of behavior generated by varying "intentions" or objectives. We derive an EM approach that clusters observed trajectories by inferring the objectives for each cluster using any of several possible IRL methods, and then uses the constructed clusters to quickly identify the intent of a trajectory. We show that a natural approach to IRL---a gradient ascent method that modifies reward parameters to maximize the likelihood of the observed trajectories---is successful at quickly identifying unknown reward functions. We demonstrate these ideas in the context of apprenticeship learning by acquiring the preferences of a human driver in a simple highway car simulator.
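The gradient-ascent component described above (maximizing the likelihood of demonstrations under a reward-parameterized stochastic policy) can be sketched in a few lines. The tiny chain MDP, the per-state reward parameters, and the finite-difference gradient below are illustrative simplifications for brevity, not the paper's exact formulation, which derives the likelihood gradient analytically and uses reward features.

```python
import numpy as np

# Tiny 3-state chain MDP: actions 0 = left, 1 = right; deterministic moves.
# BETA is the Boltzmann temperature of the assumed expert policy.
N_S, N_A, GAMMA, BETA = 3, 2, 0.9, 2.0

def step(s, a):
    return max(0, s - 1) if a == 0 else min(N_S - 1, s + 1)

def q_values(theta):
    """Soft (Boltzmann) value iteration under rewards r(s) = theta[s]."""
    V = np.zeros(N_S)
    for _ in range(150):
        Q = np.array([[theta[step(s, a)] + GAMMA * V[step(s, a)]
                       for a in range(N_A)] for s in range(N_S)])
        V = (1.0 / BETA) * np.log(np.exp(BETA * Q).sum(axis=1))  # soft max over actions
    return Q

def log_likelihood(theta, trajs):
    """Log-probability of the demonstrated (state, action) pairs
    under the Boltzmann policy induced by theta."""
    Q = q_values(theta)
    ll = 0.0
    for traj in trajs:
        for s, a in traj:
            p = np.exp(BETA * Q[s]) / np.exp(BETA * Q[s]).sum()
            ll += np.log(p[a])
    return ll

def mlirl(trajs, iters=150, lr=0.1, eps=1e-4):
    """Gradient ascent on the demonstration likelihood; the gradient is
    approximated by central finite differences here for simplicity."""
    theta = np.zeros(N_S)
    for _ in range(iters):
        grad = np.zeros(N_S)
        for i in range(N_S):
            e = np.zeros(N_S)
            e[i] = eps
            grad[i] = (log_likelihood(theta + e, trajs)
                       - log_likelihood(theta - e, trajs)) / (2 * eps)
        theta += lr * grad
    return theta

# Hypothetical "expert" demonstrations that always move right, toward state 2.
demos = [[(0, 1), (1, 1)], [(1, 1)]]
theta = mlirl(demos)
```

Running this sketch drives the learned reward at the right end of the chain above the reward at the left end, consistent with the demonstrated intent; in the paper's EM setting, one such likelihood-maximizing reward is fit per cluster of trajectories.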
Cite
Text
Babes et al. "Apprenticeship Learning About Multiple Intentions." International Conference on Machine Learning, 2011.
Markdown
[Babes et al. "Apprenticeship Learning About Multiple Intentions." International Conference on Machine Learning, 2011.](https://mlanthology.org/icml/2011/babes2011icml-apprenticeship/)
BibTeX
@inproceedings{babes2011icml-apprenticeship,
title = {{Apprenticeship Learning About Multiple Intentions}},
author = {Babes, Monica and Marivate, Vukosi and Subramanian, Kaushik and Littman, Michael L.},
booktitle = {International Conference on Machine Learning},
year = {2011},
pages = {897--904},
url = {https://mlanthology.org/icml/2011/babes2011icml-apprenticeship/}
}