Switch-Based Active Deep Dyna-Q: Efficient Adaptive Planning for Task-Completion Dialogue Policy Learning

Abstract

Training task-completion dialogue agents with reinforcement learning usually requires a large number of real user experiences. The Dyna-Q algorithm extends Q-learning by integrating a world model, and thus can effectively boost training efficiency using simulated experiences generated by the world model. The effectiveness of Dyna-Q, however, depends on the quality of the world model, or implicitly, the pre-specified ratio of real vs. simulated experiences used for Q-learning. To this end, we extend the recently proposed Deep Dyna-Q (DDQ) framework by integrating a switcher that automatically determines whether to use a real or simulated experience for Q-learning. Furthermore, we explore the use of active learning to improve sample efficiency, by encouraging the world model to generate simulated experiences in the state-action space that the agent has not (fully) explored. Our results show that by combining the switcher and active learning, the new framework, named Switch-Based Active Deep Dyna-Q (Switch-DDQ), leads to significant improvements over DDQ and Q-learning baselines in both simulation and human evaluations.
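The switcher idea described in the abstract can be sketched as follows. This is an illustrative toy only: the paper learns the switching decision, whereas this sketch substitutes a simple heuristic that thresholds the world model's recent prediction error (the class names, window size, and threshold are all assumptions, not the paper's implementation).

```python
import random


class Switcher:
    """Toy switcher: prefer simulated (planning) experience while the world
    model's recent prediction error stays low; fall back to real experience
    otherwise. All thresholds here are illustrative."""

    def __init__(self, error_threshold=0.5, window=20):
        self.error_threshold = error_threshold
        self.window = window
        self.recent_errors = []

    def record_error(self, err):
        # Keep a sliding window of world-model prediction errors.
        self.recent_errors = (self.recent_errors + [err])[-self.window:]

    def use_simulated(self):
        if not self.recent_errors:
            # No evidence about world-model quality yet: use real experience.
            return False
        avg = sum(self.recent_errors) / len(self.recent_errors)
        return avg < self.error_threshold


def training_step(switcher, real_buffer, sample_from_world_model):
    """Pick the source of the next experience tuple for Q-learning:
    a planning step (simulated) or a direct RL step (real)."""
    if switcher.use_simulated():
        return sample_from_world_model()
    return random.choice(real_buffer)
```

In this sketch the fixed real-to-simulated ratio of DDQ is replaced by a data-dependent decision, which is the core point the abstract makes: the agent leans on the world model only while it appears trustworthy.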

Cite

Text

Wu et al. "Switch-Based Active Deep Dyna-Q: Efficient Adaptive Planning for Task-Completion Dialogue Policy Learning." AAAI Conference on Artificial Intelligence, 2019. doi:10.1609/AAAI.V33I01.33017289

Markdown

[Wu et al. "Switch-Based Active Deep Dyna-Q: Efficient Adaptive Planning for Task-Completion Dialogue Policy Learning." AAAI Conference on Artificial Intelligence, 2019.](https://mlanthology.org/aaai/2019/wu2019aaai-switch/) doi:10.1609/AAAI.V33I01.33017289

BibTeX

@inproceedings{wu2019aaai-switch,
  title     = {{Switch-Based Active Deep Dyna-Q: Efficient Adaptive Planning for Task-Completion Dialogue Policy Learning}},
  author    = {Wu, Yuexin and Li, Xiujun and Liu, Jingjing and Gao, Jianfeng and Yang, Yiming},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {7289-7296},
  doi       = {10.1609/AAAI.V33I01.33017289},
  url       = {https://mlanthology.org/aaai/2019/wu2019aaai-switch/}
}