Zero-Resource Knowledge-Grounded Dialogue Generation

Abstract

While neural conversation models have shown great potential for generating informative and engaging responses by introducing external knowledge, learning such a model often requires knowledge-grounded dialogues that are difficult to obtain. To overcome the data challenge and reduce the cost of building a knowledge-grounded dialogue system, we explore the problem under a zero-resource setting, assuming no context-knowledge-response triples are needed for training. To this end, we propose representing both the knowledge that bridges a context and a response and the way that knowledge is expressed as latent variables, and devise a variational approach that can effectively estimate a generation model from independent dialogue corpora and knowledge corpora. Evaluation results on three benchmarks of knowledge-grounded dialogue generation indicate that our model achieves performance comparable to state-of-the-art methods that rely on knowledge-grounded dialogues for training, and exhibits good generalization ability across different datasets.
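At a high level, the variational approach described above can be sketched as maximizing an evidence lower bound (ELBO) on the response likelihood. The notation below is illustrative only: we write $z_k$ for the latent knowledge and $z_e$ for the latent way of expressing it; the paper's actual objective and factorization may differ in detail.

```latex
% Illustrative ELBO for a latent-variable knowledge-grounded model:
% c = dialogue context, r = response, z_k = latent knowledge,
% z_e = latent expression variable (names are our assumption).
\log p(r \mid c)
\;\ge\;
\mathbb{E}_{q(z_k, z_e \mid c, r)}
  \left[ \log p(r \mid c, z_k, z_e) \right]
\;-\;
\mathrm{KL}\!\left( q(z_k, z_e \mid c, r) \,\|\, p(z_k, z_e \mid c) \right)
```

Under such a formulation, the generator $p(r \mid c, z_k, z_e)$ can be trained without observed context-knowledge-response triples: the posterior $q$ infers the latent knowledge from independent dialogue and knowledge corpora rather than from paired supervision.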

Cite

Text

Li et al. "Zero-Resource Knowledge-Grounded Dialogue Generation." Neural Information Processing Systems, 2020.

Markdown

[Li et al. "Zero-Resource Knowledge-Grounded Dialogue Generation." Neural Information Processing Systems, 2020.](https://mlanthology.org/neurips/2020/li2020neurips-zeroresource/)

BibTeX

@inproceedings{li2020neurips-zeroresource,
  title     = {{Zero-Resource Knowledge-Grounded Dialogue Generation}},
  author    = {Li, Linxiao and Xu, Can and Wu, Wei and Zhao, Yufan and Zhao, Xueliang and Tao, Chongyang},
  booktitle = {Neural Information Processing Systems},
  year      = {2020},
  url       = {https://mlanthology.org/neurips/2020/li2020neurips-zeroresource/}
}