Interactive Learning from Policy-Dependent Human Feedback

Abstract

This paper investigates the problem of interactively learning behaviors communicated by a human teacher using positive and negative feedback. Much previous work on this problem has assumed that the feedback people provide for a decision depends on the behavior they are teaching and is independent of the learner's current policy. We present empirical results showing this assumption to be false: whether human trainers give positive or negative feedback for a decision is influenced by the learner's current policy. Based on this insight, we introduce Convergent Actor-Critic by Humans (COACH), an algorithm for learning from policy-dependent feedback that converges to a local optimum. Finally, we demonstrate that COACH can successfully learn multiple behaviors on a physical robot.
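As a quick illustration of the idea named in the abstract: in COACH, the trainer's scalar feedback plays the role of the advantage term in a policy-gradient (actor) update. Below is a minimal sketch for a softmax policy with linear action preferences; the function names, feature representation, and learning rate are illustrative assumptions, not the authors' implementation.

import numpy as np

def softmax_policy(theta, phi):
    # Softmax policy with linear action preferences: one parameter row
    # per action, so theta has shape (n_actions, n_features) and
    # phi is the current state's feature vector of length n_features.
    prefs = theta @ phi
    prefs = prefs - prefs.max()       # subtract max for numerical stability
    expd = np.exp(prefs)
    return expd / expd.sum()

def coach_update(theta, phi, action, feedback, lr=0.05):
    # COACH-style actor update: the human's scalar feedback (e.g. +1/-1)
    # stands in for the advantage term of a policy-gradient step,
    #   theta <- theta + lr * feedback * grad log pi(action | state).
    # (lr and the linear-softmax parameterization are illustrative.)
    probs = softmax_policy(theta, phi)
    # grad log pi for a linear-softmax policy: phi added to the taken
    # action's row, minus pi(a'|s) * phi on every row.
    grad = -np.outer(probs, phi)
    grad[action] += phi
    return theta + lr * feedback * grad

# Example: the trainer presses +1 after the agent takes action 2.
theta = np.zeros((3, 4))              # 3 actions, 4 state features
phi = np.array([1.0, 0.0, 0.5, 0.0])  # features of the current state
theta = coach_update(theta, phi, action=2, feedback=+1.0)

Because the update scales with the feedback, the same sketch handles policy-dependent signals: as the learner's policy improves, diminishing or sign-flipping feedback from the trainer directly reshapes the gradient step.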

Cite

Text

MacGlashan et al. "Interactive Learning from Policy-Dependent Human Feedback." International Conference on Machine Learning, 2017.

Markdown

[MacGlashan et al. "Interactive Learning from Policy-Dependent Human Feedback." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/macglashan2017icml-interactive/)

BibTeX

@inproceedings{macglashan2017icml-interactive,
  title     = {{Interactive Learning from Policy-Dependent Human Feedback}},
  author    = {MacGlashan, James and Ho, Mark K. and Loftin, Robert and Peng, Bei and Wang, Guan and Roberts, David L. and Taylor, Matthew E. and Littman, Michael L.},
  booktitle = {International Conference on Machine Learning},
  year      = {2017},
  pages     = {2285--2294},
  volume    = {70},
  url       = {https://mlanthology.org/icml/2017/macglashan2017icml-interactive/}
}