Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection
Abstract
Recommendation is a prevalent application of machine learning that affects many users; therefore, it is important for recommender models to be accurate and interpretable. In this work, we propose a method to both interpret and augment the predictions of black-box recommender systems. In particular, we propose to interpret feature interactions from a source recommender model and explicitly encode these interactions in a target recommender model, where both source and target models are black-boxes. By not assuming the structure of the recommender system, our approach can be used in general settings. In our experiments, we focus on a prominent use of machine learning recommendation: ad-click prediction. We found that our interaction interpretations are both informative and predictive, e.g., significantly outperforming existing recommender models. What's more, the same approach to interpret interactions can provide new insights into domains even beyond recommendation, such as text and image classification.
Cite
Text
Tsang et al. "Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection." International Conference on Learning Representations, 2020.
Markdown
[Tsang et al. "Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection." International Conference on Learning Representations, 2020.](https://mlanthology.org/iclr/2020/tsang2020iclr-feature/)
BibTeX
@inproceedings{tsang2020iclr-feature,
title = {{Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection}},
author = {Tsang, Michael and Cheng, Dehua and Liu, Hanpeng and Feng, Xue and Zhou, Eric and Liu, Yan},
booktitle = {International Conference on Learning Representations},
year = {2020},
url = {https://mlanthology.org/iclr/2020/tsang2020iclr-feature/}
}