Learning Probabilistic Submodular Diversity Models via Noise Contrastive Estimation
Abstract
Modeling the diversity of sets of items is important in many applications, such as product recommendation and data summarization. Probabilistic submodular models, a family of models that includes the determinantal point process, form a natural class of distributions, encouraging effects such as diversity, repulsion and coverage. Current models, however, are limited to small and medium numbers of items due to the high time complexity of learning and inference. In this paper, we propose FLID, a novel log-submodular diversity model that scales to large numbers of items and can be efficiently learned using noise contrastive estimation. We show that our model achieves state-of-the-art performance in terms of model fit, yet can be learned orders of magnitude faster. We demonstrate the wide applicability of our model in several experiments.
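The approach the abstract describes — fitting an unnormalized log-submodular set model by training a logistic discriminator between observed sets and sets drawn from a tractable noise distribution — can be illustrated with a minimal sketch. The facility-location-style utility, the product noise distribution, and all parameter values below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, n_dims = 5, 2
# Hypothetical FLID-style parameters (illustrative, not fitted values):
u = rng.normal(size=n_items)       # per-item utilities
W = rng.random((n_items, n_dims))  # latent embeddings driving diversity

def flid_utility(S):
    """Unnormalized log-probability F(S) of a set S (list of item indices):
    item utilities plus a coverage-style bonus that rewards diverse sets."""
    if len(S) == 0:
        return 0.0
    Ws = W[S]
    return u[S].sum() + (Ws.max(axis=0) - Ws.sum(axis=0)).sum()

# Noise distribution: include each item independently with probability q,
# so its log-probability is available in closed form.
q = 0.5
def log_noise(S):
    return len(S) * np.log(q) + (n_items - len(S)) * np.log(1 - q)

def nce_loss(data_sets, noise_sets, log_Z):
    """Binary logistic loss separating data sets from noise sets.
    log_Z is the log-partition function, treated as a free parameter
    so the model need not be normalized during training."""
    loss = 0.0
    for S in data_sets:
        g = flid_utility(S) - log_Z - log_noise(S)
        loss += np.logaddexp(0.0, -g)  # -log sigmoid(g): data should score high
    for S in noise_sets:
        g = flid_utility(S) - log_Z - log_noise(S)
        loss += np.logaddexp(0.0, g)   # -log(1 - sigmoid(g)): noise should score low
    return loss / (len(data_sets) + len(noise_sets))

data_sets = [[0, 2], [1, 3, 4]]  # toy stand-ins for observed diverse sets
noise_sets = [list(np.flatnonzero(rng.random(n_items) < q)) for _ in range(4)]
loss_value = nce_loss(data_sets, noise_sets, log_Z=0.0)
```

Minimizing this loss over the model parameters (and `log_Z`) is what makes NCE attractive here: each term only evaluates the set function on a concrete set, so no sum over the exponentially many subsets is ever needed.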
Cite
Text
Tschiatschek et al. "Learning Probabilistic Submodular Diversity Models via Noise Contrastive Estimation." International Conference on Artificial Intelligence and Statistics, 2016.
Markdown
[Tschiatschek et al. "Learning Probabilistic Submodular Diversity Models via Noise Contrastive Estimation." International Conference on Artificial Intelligence and Statistics, 2016.](https://mlanthology.org/aistats/2016/tschiatschek2016aistats-learning/)
BibTeX
@inproceedings{tschiatschek2016aistats-learning,
title = {{Learning Probabilistic Submodular Diversity Models via Noise Contrastive Estimation}},
author = {Tschiatschek, Sebastian and Djolonga, Josip and Krause, Andreas},
booktitle = {International Conference on Artificial Intelligence and Statistics},
year = {2016},
pages = {770--779},
url = {https://mlanthology.org/aistats/2016/tschiatschek2016aistats-learning/}
}