RecoNet: An Interpretable Neural Architecture for Recommender Systems

Abstract

Neural systems offer high predictive accuracy but are plagued by long training times and low interpretability. We present a simple neural architecture for recommender systems that addresses several of these shortcomings. First, the approach achieves predictive accuracy comparable to state-of-the-art recommender approaches. Second, owing to its simplicity, the trained model can be interpreted easily because it provides the individual contribution of each input feature to the decision. Our method is three orders of magnitude faster than general-purpose explanatory approaches such as LIME. Finally, thanks to its design, our architecture addresses cold-start issues, so the model does not require retraining in the presence of new users.

Cite

Text

Fusco et al. "RecoNet: An Interpretable Neural Architecture for Recommender Systems." International Joint Conference on Artificial Intelligence, 2019. doi:10.24963/IJCAI.2019/325

Markdown

[Fusco et al. "RecoNet: An Interpretable Neural Architecture for Recommender Systems." International Joint Conference on Artificial Intelligence, 2019.](https://mlanthology.org/ijcai/2019/fusco2019ijcai-reconet/) doi:10.24963/IJCAI.2019/325

BibTeX

@inproceedings{fusco2019ijcai-reconet,
  title     = {{RecoNet: An Interpretable Neural Architecture for Recommender Systems}},
  author    = {Fusco, Francesco and Vlachos, Michalis and Vasileiadis, Vasileios and Wardatzky, Kathrin and Schneider, Johannes},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2019},
  pages     = {2343--2349},
  doi       = {10.24963/IJCAI.2019/325},
  url       = {https://mlanthology.org/ijcai/2019/fusco2019ijcai-reconet/}
}