Learning Novel Policies for Tasks

Abstract

In this work, we present a reinforcement learning algorithm that can find a variety of policies (novel policies) for a task that is given by a task reward function. Our method does this by creating a second reward function that recognizes previously seen state sequences and rewards them according to their novelty, which is measured using autoencoders that have been trained on state sequences from previously discovered policies. We present a two-objective update technique for policy gradient algorithms in which each update of the policy is a compromise between improving the task reward and improving the novelty reward. Using this method, we end up with a collection of policies that each solve the given task while carrying out action sequences that are distinct from one another. We demonstrate this method on maze navigation tasks, a reaching task for a simulated robot arm, and a locomotion task for a hopper. We also demonstrate the effectiveness of our approach on deceptive tasks in which policy gradient methods often get stuck.
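The sketch below is not the authors' code; it is a minimal illustration of the two ideas in the abstract, using NumPy. The autoencoder over flattened state sequences, the `novelty_reward` helper, the `combined_update` rule, and all hyperparameters are hypothetical stand-ins for the paper's novelty measure and two-objective policy update.

```python
import numpy as np

rng = np.random.default_rng(0)


class TinyAutoencoder:
    """One-hidden-layer autoencoder over flattened state sequences (illustrative)."""

    def __init__(self, seq_dim, hidden_dim=32, lr=1e-3):
        self.W1 = rng.normal(0.0, 0.1, (seq_dim, hidden_dim))
        self.W2 = rng.normal(0.0, 0.1, (hidden_dim, seq_dim))
        self.lr = lr

    def reconstruct(self, x):
        h = np.tanh(x @ self.W1)
        return h @ self.W2, h

    def train_step(self, x):
        # One gradient step on the squared reconstruction error,
        # fit on state sequences from an already-discovered policy.
        x_hat, h = self.reconstruct(x)
        err = x_hat - x
        grad_W2 = np.outer(h, err)
        grad_W1 = np.outer(x, (err @ self.W2.T) * (1.0 - h ** 2))
        self.W2 -= self.lr * grad_W2
        self.W1 -= self.lr * grad_W1


def novelty_reward(autoencoders, state_seq):
    """Novelty as reconstruction error: a sequence is novel only if it is
    poorly reconstructed by every autoencoder from earlier policies."""
    x = np.asarray(state_seq, dtype=float).ravel()
    if not autoencoders:
        return 0.0
    errors = [np.mean((ae.reconstruct(x)[0] - x) ** 2) for ae in autoencoders]
    return float(min(errors))


def combined_update(theta, grad_task, grad_novelty, alpha=0.5, step=0.01):
    """Compromise between the two objectives: here a simple convex combination
    of the task-reward and novelty-reward policy gradients stands in for the
    paper's two-objective update."""
    return theta + step * (alpha * grad_task + (1.0 - alpha) * grad_novelty)
```

In this reading, after each policy is trained, a new autoencoder is fit to its state sequences and added to the pool, so the next policy is pushed toward behaviors that none of the earlier policies exhibit; the actual update rule and novelty measure are as described in the paper, not this weighted-sum stand-in.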

Cite

Text

Zhang et al. "Learning Novel Policies for Tasks." International Conference on Machine Learning, 2019.

Markdown

[Zhang et al. "Learning Novel Policies for Tasks." International Conference on Machine Learning, 2019.](https://mlanthology.org/icml/2019/zhang2019icml-learning/)

BibTeX

@inproceedings{zhang2019icml-learning,
  title     = {{Learning Novel Policies for Tasks}},
  author    = {Zhang, Yunbo and Yu, Wenhao and Turk, Greg},
  booktitle = {International Conference on Machine Learning},
  year      = {2019},
  pages     = {7483--7492},
  volume    = {97},
  url       = {https://mlanthology.org/icml/2019/zhang2019icml-learning/}
}