Wasserstein Learning of Determinantal Point Processes

Abstract

Determinantal point processes (DPPs) have received significant attention as an elegant probabilistic model for discrete subset selection. Most prior work on DPP learning focuses on maximum likelihood estimation (MLE). While efficient and scalable, MLE approaches do not leverage any subset similarity information and may fail to recover the true generative distribution of discrete data. In this work, by deriving a differentiable relaxation of a DPP sampling algorithm, we present a novel approach for learning DPPs that minimizes the Wasserstein distance between the model and data composed of observed subsets. Through an evaluation on a real-world dataset, we show that our Wasserstein learning approach provides significantly improved predictive performance on a generative task compared to DPPs trained using MLE.
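For readers unfamiliar with the MLE baseline the abstract contrasts against: a DPP over a ground set of n items is parameterized by a positive semidefinite kernel L, and assigns a subset Y the probability P(Y) = det(L_Y) / det(L + I), where L_Y is the submatrix of L indexed by Y. Below is a minimal sketch of standard maximum likelihood training of a low-rank DPP kernel. It is illustrative only, not the paper's code: the item count, the toy subsets, the low-rank parameterization L = VVᵀ + εI, and the function name `dpp_log_likelihood` are all assumptions, and the paper's actual contribution, a differentiable relaxation of DPP sampling enabling a Wasserstein objective, is not reproduced here.

```python
import torch

def dpp_log_likelihood(L, subsets):
    """Mean DPP log-likelihood of observed subsets under kernel L.

    Uses the standard identity log P(Y) = log det(L_Y) - log det(L + I).
    """
    n = L.shape[0]
    log_norm = torch.logdet(L + torch.eye(n))  # global normalizer, shared by all subsets
    ll = 0.0
    for Y in subsets:
        idx = torch.as_tensor(Y)
        L_Y = L[idx][:, idx]                   # principal submatrix indexed by Y
        ll = ll + torch.logdet(L_Y) - log_norm
    return ll / len(subsets)

# Hypothetical setup: 20 items, rank-5 factor V. L = V V^T + eps*I stays
# positive definite, so all log-determinants above are well defined.
V = torch.randn(20, 5, requires_grad=True)
opt = torch.optim.Adam([V], lr=0.05)
subsets = [[0, 3, 7], [1, 2], [4, 5, 9, 12]]   # toy observed subsets

for _ in range(100):
    L = V @ V.T + 1e-3 * torch.eye(20)
    loss = -dpp_log_likelihood(L, subsets)     # MLE = minimize negative log-likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()
```

As the abstract notes, this objective scores each observed subset independently and ignores similarity between subsets; the paper's Wasserstein approach instead compares the model's sampling distribution to the empirical distribution of observed subsets as a whole.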

Cite

Text

Anquetil et al. "Wasserstein Learning of Determinantal Point Processes." NeurIPS 2020 Workshops: LMCA, 2020.

Markdown

[Anquetil et al. "Wasserstein Learning of Determinantal Point Processes." NeurIPS 2020 Workshops: LMCA, 2020.](https://mlanthology.org/neuripsw/2020/anquetil2020neuripsw-wasserstein/)

BibTeX

@inproceedings{anquetil2020neuripsw-wasserstein,
  title     = {{Wasserstein Learning of Determinantal Point Processes}},
  author    = {Anquetil, Lucas and Gartrell, Mike and Rakotomamonjy, Alain and Tanielian, Ugo and Calauzènes, Clément},
  booktitle = {NeurIPS 2020 Workshops: LMCA},
  year      = {2020},
  url       = {https://mlanthology.org/neuripsw/2020/anquetil2020neuripsw-wasserstein/}
}