Co-Attentive Multi-Task Learning for Explainable Recommendation

Abstract

Despite widespread adoption, recommender systems remain mostly black boxes. Recently, providing explanations for why items are recommended has attracted increasing attention due to its capability to enhance user trust and satisfaction. In this paper, we propose a co-attentive multi-task learning model for explainable recommendation. Our model improves both the prediction accuracy and the explainability of recommendation by fully exploiting the correlations between the recommendation task and the explanation task. In particular, we design an encoder-selector-decoder architecture inspired by the human information-processing model in cognitive psychology. We also propose a hierarchical co-attentive selector to effectively model the knowledge transferred across the two tasks. Our model not only enhances the prediction accuracy of the recommendation task, but also generates linguistic explanations that are fluent, useful, and highly personalized. Experiments on three public datasets demonstrate the effectiveness of our model.
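To give a rough intuition for the co-attentive selector mentioned above, the sketch below shows a generic co-attention step between two sets of task representations. This is a minimal illustration, not the authors' exact architecture: the feature matrices `H_rec` and `H_exp`, the bilinear affinity matrix `W`, and the max-pool-then-softmax readout are all assumptions chosen for brevity.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def co_attention(H_rec, H_exp, W):
    """Generic co-attention between two task representations (illustrative).

    H_rec: (n, d) features for the recommendation side (hypothetical).
    H_exp: (m, d) features for the explanation side (hypothetical).
    W:     (d, d) learned bilinear affinity matrix.
    Returns attention weights over each side, shapes (n,) and (m,).
    """
    # Affinity matrix: pairwise compatibility between the two feature sets.
    M = np.tanh(H_rec @ W @ H_exp.T)        # (n, m)
    # Each side attends to its features by max-pooling the affinity
    # matrix along the other axis, then normalizing with softmax.
    a_rec = softmax(M.max(axis=1))          # weight per recommendation feature
    a_exp = softmax(M.max(axis=0))          # weight per explanation feature
    return a_rec, a_exp

# Toy usage with random features.
rng = np.random.default_rng(0)
a_rec, a_exp = co_attention(rng.normal(size=(4, 8)),
                            rng.normal(size=(5, 8)),
                            rng.normal(size=(8, 8)))
```

In the full model, such weights would let each task softly select the shared features most relevant to it, which is how cross-task correlations are exploited.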

Cite

Text

Chen et al. "Co-Attentive Multi-Task Learning for Explainable Recommendation." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/296

Markdown

[Chen et al. "Co-Attentive Multi-Task Learning for Explainable Recommendation." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/chen2019ijcai-co/) doi:10.24963/IJCAI.2019/296

BibTeX

@inproceedings{chen2019ijcai-co,
  title     = {{Co-Attentive Multi-Task Learning for Explainable Recommendation}},
  author    = {Chen, Zhongxia and Wang, Xiting and Xie, Xing and Wu, Tong and Bu, Guoqing and Wang, Yining and Chen, Enhong},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {2137--2143},
  doi       = {10.24963/IJCAI.2019/296},
  url       = {https://mlanthology.org/ijcai/2019/chen2019ijcai-co/}
}