Towards Explainable Conversational Recommendation

Abstract

Recent studies have shown that both accuracy and explainability are important for recommendation. In this paper, we introduce explainable conversational recommendation, which enables incremental improvement of both recommendation accuracy and explanation quality through multi-turn user-model conversation. We show how the problem can be formulated, and design an incremental multi-task learning framework that enables tight collaboration between recommendation prediction, explanation generation, and user feedback integration. We also propose a multi-view feedback integration method to enable effective incremental model updates. Empirical results demonstrate that our model not only consistently improves the recommendation accuracy but also generates explanations that fit the user interests reflected in the feedback.
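The sketch below is a rough, hypothetical illustration of the recommend, explain, refine loop the abstract describes: at each conversation turn the system predicts a score, surfaces the aspects behind it as an explanation, and incrementally updates its user model from the feedback. All names (`AspectRecommender`, the aspect list, the update rule) are assumptions for illustration only; the paper's actual method is a neural incremental multi-task framework, not this toy model.

```python
# Hypothetical sketch of a multi-turn explainable recommendation loop.
# Not the paper's model: a toy aspect-based recommender used only to
# illustrate the interplay of prediction, explanation, and feedback.

ASPECTS = ["price", "service", "location", "cleanliness", "food"]


class AspectRecommender:
    """Toy recommender that scores an item by how well its aspects match
    the user's (incrementally learned) aspect preferences."""

    def __init__(self, learning_rate=0.5):
        self.user_pref = {a: 0.0 for a in ASPECTS}  # learned interest per aspect
        self.lr = learning_rate

    def score(self, item_aspects):
        # Recommendation prediction: weighted sum of aspect preferences.
        return sum(self.user_pref[a] * item_aspects.get(a, 0.0) for a in ASPECTS)

    def explain(self, item_aspects, top_k=2):
        # Explanation generation: report the aspects contributing most to the score.
        contrib = {a: self.user_pref[a] * item_aspects.get(a, 0.0) for a in ASPECTS}
        return sorted(contrib, key=contrib.get, reverse=True)[:top_k]

    def integrate_feedback(self, liked_aspects, disliked_aspects):
        # Feedback integration: shift preferences toward aspects the user
        # endorsed in the explanation and away from those they rejected.
        for a in liked_aspects:
            self.user_pref[a] += self.lr
        for a in disliked_aspects:
            self.user_pref[a] -= self.lr


if __name__ == "__main__":
    item = {"price": 0.9, "service": 0.2, "location": 0.7,
            "cleanliness": 0.4, "food": 0.8}
    model = AspectRecommender()

    # Simulated multi-turn conversation: the user reacts to each explanation,
    # and the model updates before producing the next recommendation.
    simulated_feedback = [
        (["food"], ["price"]),   # turn 1: user cares about food, not price
        (["location"], []),      # turn 2: user also values the location aspect
    ]
    for turn, (liked, disliked) in enumerate(simulated_feedback, 1):
        print(f"Turn {turn}: score={model.score(item):.2f}, "
              f"explanation aspects={model.explain(item)}")
        model.integrate_feedback(liked, disliked)
    print(f"Final: score={model.score(item):.2f}, "
          f"explanation aspects={model.explain(item)}")
```

In this sketch both the score and the explanation improve across turns because they share the same preference state, which is the kind of tight collaboration between the three tasks the abstract refers to.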

Cite

Text

Chen et al. "Towards Explainable Conversational Recommendation." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/414

Markdown

[Chen et al. "Towards Explainable Conversational Recommendation." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/chen2020ijcai-explainable/) doi:10.24963/IJCAI.2020/414

BibTeX

@inproceedings{chen2020ijcai-explainable,
  title     = {{Towards Explainable Conversational Recommendation}},
  author    = {Chen, Zhongxia and Wang, Xiting and Xie, Xing and Parsana, Mehul and Soni, Akshay and Ao, Xiang and Chen, Enhong},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {2994--3000},
  doi       = {10.24963/IJCAI.2020/414},
  url       = {https://mlanthology.org/ijcai/2020/chen2020ijcai-explainable/}
}