Improving Reinforcement Learning with Human Input
Abstract
Reinforcement learning (RL) has had many successes when learning autonomously. This paper and accompanying talk consider how to make use of a non-technical human participant, when available. In particular, we consider the case where a human could 1) provide demonstrations of good behavior, 2) provide online evaluative feedback, or 3) define a curriculum of tasks for the agent to learn on. In all cases, our work has shown such information can be effectively leveraged. After giving a high-level overview of this work, we will highlight a set of open questions and suggest where future work could be usefully focused.
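As a concrete illustration of the second kind of input mentioned in the abstract, the sketch below shows one common way online evaluative feedback can be folded into a standard tabular Q-learning update, by treating the human's scalar signal as an additional reward term. This is a minimal illustrative sketch, not the specific algorithms surveyed in the paper; the function names, the feedback weighting, and the +1/-1 feedback encoding are assumptions made here for clarity.

```python
import random
from collections import defaultdict

# Illustrative sketch (assumptions, not the paper's method): mix human
# evaluative feedback into the reward used by a tabular Q-learning update.

ALPHA, GAMMA = 0.1, 0.99     # learning rate, discount factor
FEEDBACK_WEIGHT = 1.0        # assumed relative weight of human feedback

Q = defaultdict(float)       # Q[(state, action)] -> value estimate

def update(state, action, env_reward, human_feedback, next_state, actions):
    """One Q-learning step that adds scalar human feedback
    (e.g., +1 'good' / -1 'bad', 0 if no feedback was given)
    to the environment reward before computing the TD target."""
    shaped_reward = env_reward + FEEDBACK_WEIGHT * human_feedback
    best_next = max(Q[(next_state, a)] for a in actions)
    td_target = shaped_reward + GAMMA * best_next
    Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])

def epsilon_greedy(state, actions, epsilon=0.1):
    """Explore with probability epsilon, otherwise act greedily on Q."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])
```

In practice, approaches in this area differ in exactly where the human signal enters (as a shaped reward, a separate predicted feedback model, or a policy bias); the additive form above is only one simple choice.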
Cite
Text
Taylor. "Improving Reinforcement Learning with Human Input." International Joint Conference on Artificial Intelligence, 2018. doi:10.24963/IJCAI.2018/817
Markdown
[Taylor. "Improving Reinforcement Learning with Human Input." International Joint Conference on Artificial Intelligence, 2018.](https://mlanthology.org/ijcai/2018/taylor2018ijcai-improving/) doi:10.24963/IJCAI.2018/817
BibTeX
@inproceedings{taylor2018ijcai-improving,
  title = {{Improving Reinforcement Learning with Human Input}},
  author = {Taylor, Matthew E.},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year = {2018},
  pages = {5724--5728},
  doi = {10.24963/IJCAI.2018/817},
  url = {https://mlanthology.org/ijcai/2018/taylor2018ijcai-improving/}
}