BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems

Abstract

We present a new algorithm that significantly improves the efficiency of exploration for deep Q-learning agents in dialogue systems. Our agents explore via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop neural network. Our algorithm learns much faster than common exploration strategies such as ε-greedy, Boltzmann exploration, bootstrapping, and intrinsic-reward methods. Additionally, we show that spiking the replay buffer with experiences from just a few successful episodes can make Q-learning feasible when it might otherwise fail.
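The core idea, Thompson sampling by drawing a Monte Carlo weight sample from a Bayes-by-Backprop network and acting greedily with respect to the sampled Q-values, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy state dimension, action count, and single linear Q-layer are assumptions for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative, not from the paper): a 4-dim dialogue-state
# feature vector and 3 candidate system actions.
STATE_DIM, N_ACTIONS = 4, 3

# Bayes-by-Backprop keeps a factorized Gaussian posterior over weights,
# parameterized by a mean mu and a rho with sigma = log(1 + exp(rho)).
mu = rng.normal(0.0, 0.1, size=(STATE_DIM, N_ACTIONS))
rho = np.full((STATE_DIM, N_ACTIONS), -3.0)

def sample_weights():
    """Draw one reparameterized Monte Carlo sample of the Q-weights."""
    sigma = np.log1p(np.exp(rho))          # softplus keeps sigma > 0
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def thompson_action(state):
    """Thompson sampling: one posterior sample per decision,
    then act greedily with respect to the sampled Q-values."""
    w = sample_weights()
    q = state @ w                          # sampled Q(state, a) for each action
    return int(np.argmax(q))

state = rng.standard_normal(STATE_DIM)
action = thompson_action(state)
```

Because a fresh weight sample is drawn for each decision, actions whose Q-value the posterior is uncertain about are occasionally selected, which is what drives exploration without an explicit ε schedule.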

Cite

Text

Lipton et al. "BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems." AAAI Conference on Artificial Intelligence, 2018. doi:10.1609/AAAI.V32I1.11946

Markdown

[Lipton et al. "BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems." AAAI Conference on Artificial Intelligence, 2018.](https://mlanthology.org/aaai/2018/lipton2018aaai-bbq/) doi:10.1609/AAAI.V32I1.11946

BibTeX

@inproceedings{lipton2018aaai-bbq,
  title     = {{BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems}},
  author    = {Lipton, Zachary C. and Li, Xiujun and Gao, Jianfeng and Li, Lihong and Ahmed, Faisal and Deng, Li},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2018},
  pages     = {5237--5244},
  doi       = {10.1609/AAAI.V32I1.11946},
  url       = {https://mlanthology.org/aaai/2018/lipton2018aaai-bbq/}
}