Differentiable Clustering and Partial Fenchel-Young Losses

Abstract

We introduce a differentiable clustering method based on stochastic perturbations of minimum-weight spanning forests. This allows us to include clustering in end-to-end trainable pipelines, with efficient gradients. We show that our method performs well even in difficult settings, such as data sets with high noise and challenging geometries. We also formulate an ad hoc loss to efficiently learn from partial clustering data using this operation. We demonstrate its performance on several data sets for supervised and semi-supervised tasks.
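To make the abstract's core operation concrete, here is a minimal sketch (our own illustration under stated assumptions, not the authors' implementation): clustering by cutting the heaviest edges of a minimum-weight spanning tree, smoothed by averaging over Gaussian perturbations of the pairwise weights. The helper names spanning_forest_coclustering and perturbed_coclustering are hypothetical; only SciPy's minimum_spanning_tree and connected_components are real library calls.

import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree

def spanning_forest_coclustering(weights, k):
    # Cluster by cutting the k-1 heaviest edges of the minimum spanning
    # tree, leaving a k-component forest; return the n x n binary
    # co-clustering (same-cluster indicator) matrix.
    mst = minimum_spanning_tree(weights).toarray()
    edges = np.argwhere(mst > 0)
    heaviest_first = np.argsort(mst[edges[:, 0], edges[:, 1]])[::-1]
    for i, j in edges[heaviest_first[:k - 1]]:
        mst[i, j] = 0.0
    _, labels = connected_components(mst, directed=False)
    return (labels[:, None] == labels[None, :]).astype(float)

def perturbed_coclustering(weights, k, n_samples=100, sigma=0.1, seed=0):
    # Monte Carlo average of co-clustering matrices under Gaussian
    # perturbations of the pairwise weights: the stochastic smoothing step.
    rng = np.random.default_rng(seed)
    out = np.zeros_like(weights, dtype=float)
    for _ in range(n_samples):
        noise = rng.standard_normal(weights.shape)
        noise = np.triu(noise, 1) + np.triu(noise, 1).T  # symmetric noise
        # Keep perturbed weights positive: SciPy treats zeros as absent edges.
        perturbed = np.maximum(weights + sigma * noise, 1e-9)
        out += spanning_forest_coclustering(perturbed, k)
    return out / n_samples

# Example: smoothed co-clustering of 30 random 2-D points into k=3 groups.
X = np.random.default_rng(1).normal(size=(30, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
M = perturbed_coclustering(D, k=3)

Gradients with respect to the weights are not shown; in the perturbed-optimizer framework, such smoothed outputs admit Monte Carlo gradient estimates, which is what makes the pipeline end-to-end trainable. The partial Fenchel-Young loss used for learning from partial clustering labels is likewise beyond this sketch.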

Cite

Text

Stewart et al. "Differentiable Clustering and Partial Fenchel-Young Losses." ICML 2023 Workshops: Differentiable Almost Everything, 2023.

Markdown

[Stewart et al. "Differentiable Clustering and Partial Fenchel-Young Losses." ICML 2023 Workshops: Differentiable Almost Everything, 2023.](https://mlanthology.org/icmlw/2023/stewart2023icmlw-differentiable/)

BibTeX

@inproceedings{stewart2023icmlw-differentiable,
  title     = {{Differentiable Clustering and Partial Fenchel-Young Losses}},
  author    = {Stewart, Lawrence and Bach, Francis and Llinares-López, Felipe and Berthet, Quentin},
  booktitle = {ICML 2023 Workshops: Differentiable Almost Everything},
  year      = {2023},
  url       = {https://mlanthology.org/icmlw/2023/stewart2023icmlw-differentiable/}
}