Leveraging Human Guidance for Deep Reinforcement Learning Tasks
Abstract
Reinforcement learning agents can learn to solve sequential decision tasks by interacting with the environment. Human knowledge of how to solve these tasks can be incorporated through imitation learning, where the agent learns to imitate human-demonstrated decisions. However, human guidance is not limited to demonstrations. Other types of guidance could be more suitable for certain tasks and require less human effort. This survey provides a high-level overview of five recent learning frameworks that primarily rely on human guidance other than conventional, step-by-step action demonstrations. We review the motivation, assumptions, and implementation of each framework. We then discuss possible future research directions.
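As a concrete point of reference for the conventional demonstration-based setting the abstract contrasts against, below is a minimal behavioral-cloning sketch (not taken from the paper): the agent imitates human-demonstrated decisions by treating recorded (state, action) pairs as a supervised-learning dataset. The network size, tensor shapes, and the random stand-in "demonstration" data are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Hypothetical demonstration data: in practice these would be states observed
# by a human demonstrator and the actions they chose; here they are random
# placeholders with illustrative dimensions.
state_dim, num_actions = 8, 4
demo_states = torch.randn(256, state_dim)
demo_actions = torch.randint(0, num_actions, (256,))

# A small policy network mapping states to action logits.
policy = nn.Sequential(
    nn.Linear(state_dim, 64),
    nn.ReLU(),
    nn.Linear(64, num_actions),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Behavioral cloning: supervised training to match the demonstrated actions.
for epoch in range(100):
    logits = policy(demo_states)           # predicted action logits per state
    loss = loss_fn(logits, demo_actions)   # penalize disagreement with the human
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The frameworks surveyed in the paper replace or supplement this step-by-step action supervision with other signals of human guidance (e.g., evaluative feedback, preferences, attention, or state-only observation) that can be cheaper for humans to provide.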
Cite
Text
Zhang et al. "Leveraging Human Guidance for Deep Reinforcement Learning Tasks." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/884
Markdown
[Zhang et al. "Leveraging Human Guidance for Deep Reinforcement Learning Tasks." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/zhang2019ijcai-leveraging/) doi:10.24963/IJCAI.2019/884
BibTeX
@inproceedings{zhang2019ijcai-leveraging,
title = {{Leveraging Human Guidance for Deep Reinforcement Learning Tasks}},
author = {Zhang, Ruohan and Torabi, Faraz and Guan, Lin and Ballard, Dana H. and Stone, Peter},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2019},
pages = {6339--6346},
doi = {10.24963/IJCAI.2019/884},
url = {https://mlanthology.org/ijcai/2019/zhang2019ijcai-leveraging/}
}