QUOTA: The Quantile Option Architecture for Reinforcement Learning

Abstract

In this paper, we propose the Quantile Option Architecture (QUOTA) for exploration, building on recent advances in distributional reinforcement learning (RL). In QUOTA, decision making is based on quantiles of a value distribution, not only its mean. QUOTA provides a new dimension for exploration by exploiting both the optimism and the pessimism encoded in a value distribution. We demonstrate the performance advantage of QUOTA in both challenging video games and physical robot simulators.
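To make the core idea concrete, here is a minimal sketch of quantile-based action selection: instead of acting greedily with respect to the mean of each action's return distribution, an agent can act greedily with respect to a chosen quantile, where a high quantile is optimistic and a low quantile is pessimistic. The quantile estimates below are illustrative random numbers, not the paper's learned network, and `greedy_wrt_quantile` is a hypothetical helper, not an API from the paper's code.

```python
import numpy as np

# Hypothetical per-action quantile estimates of the return distribution:
# one row per action, columns ordered from lowest to highest quantile.
# (Illustrative random values only -- in QUOTA these would come from a
# learned quantile network.)
rng = np.random.default_rng(0)
N_ACTIONS, N_QUANTILES = 3, 5
quantiles = np.sort(rng.normal(size=(N_ACTIONS, N_QUANTILES)), axis=1)

def greedy_wrt_quantile(q, j):
    """Pick the action whose j-th quantile estimate is largest.

    j = 0 is the most pessimistic choice, j = N_QUANTILES - 1 the most
    optimistic; intermediate j interpolates between the two.
    """
    return int(np.argmax(q[:, j]))

mean_greedy = int(np.argmax(quantiles.mean(axis=1)))  # standard mean-based choice
optimistic = greedy_wrt_quantile(quantiles, N_QUANTILES - 1)
pessimistic = greedy_wrt_quantile(quantiles, 0)
```

In the paper, each quantile index plays the role of an option: a higher-level policy learns which quantile to act greedily with respect to, so the degree of optimism itself is adapted during training rather than fixed as in this sketch.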

Cite

Text

Zhang and Yao. "QUOTA: The Quantile Option Architecture for Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/aaai.v33i01.33015797

Markdown

[Zhang and Yao. "QUOTA: The Quantile Option Architecture for Reinforcement Learning." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/zhang2019aaai-quota/) doi:10.1609/aaai.v33i01.33015797

BibTeX

@inproceedings{zhang2019aaai-quota,
  title     = {{QUOTA: The Quantile Option Architecture for Reinforcement Learning}},
  author    = {Zhang, Shangtong and Yao, Hengshuai},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {5797--5804},
  doi       = {10.1609/aaai.v33i01.33015797},
  url       = {https://mlanthology.org/aaai/2019/zhang2019aaai-quota/}
}