SIMPLE: A Gradient Estimator for $k$-Subset Sampling
Abstract
$k$-subset sampling is ubiquitous in machine learning, enabling regularization and interpretability through sparsity. The challenge lies in rendering $k$-subset sampling amenable to end-to-end learning. This has typically involved relaxing the reparameterized samples to allow for backpropagation, with the risk of introducing high bias and high variance. In this work, we fall back to discrete $k$-subset sampling on the forward pass. This is coupled with using the gradient with respect to the exact marginals, computed efficiently, as a proxy for the true gradient. We show that our gradient estimator, SIMPLE, exhibits lower bias and variance compared to state-of-the-art estimators, including the straight-through Gumbel estimator when $k=1$. Empirical results show improved performance on learning to explain and sparse linear regression. We also give an algorithm for computing the exact ELBO of the $k$-subset distribution, obtaining significantly lower loss than the state of the art.
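The estimator described in the abstract can be mimicked with a straight-through-style scheme that routes gradients through the exact inclusion marginals of the $k$-subset distribution. The PyTorch sketch below is illustrative only: the names k_subset_marginals and simple_style_sample are hypothetical, the marginals are computed with a naive O(n^2 k) elementary-symmetric-polynomial recursion without log-space stabilization, and Gumbel top-$k$ is used as a stand-in sampler rather than the paper's exact sampling procedure.

import torch

def k_subset_marginals(logits, k):
    # Exact inclusion marginals P(i in S) for p(S) ∝ prod_{i in S} exp(logits_i),
    # |S| = k, via elementary symmetric polynomials. Naive O(n^2 k), no numerical
    # stabilization; assumes a 1-D logits vector. Illustration only.
    w = logits.exp()
    n = w.shape[-1]

    def esp(weights, up_to):
        # e[j] holds the degree-j elementary symmetric polynomial of `weights`.
        e = [torch.ones((), dtype=w.dtype)] + [torch.zeros((), dtype=w.dtype) for _ in range(up_to)]
        for wi in weights:
            for j in range(up_to, 0, -1):
                e[j] = e[j] + wi * e[j - 1]
        return e

    e_all = esp(w, k)
    marg = []
    for i in range(n):
        e_wo_i = esp(torch.cat([w[:i], w[i + 1:]]), k - 1)
        # P(i in S) = w_i * e_{k-1}(w without i) / e_k(w)
        marg.append(w[i] * e_wo_i[k - 1] / e_all[k])
    return torch.stack(marg)

def simple_style_sample(logits, k):
    # Forward: a hard k-hot sample (Gumbel top-k used here as a stand-in sampler).
    # Backward: gradients flow through the exact marginals instead of the sample.
    u = torch.rand_like(logits).clamp_min(1e-10)
    topk = torch.topk(logits - torch.log(-torch.log(u)), k).indices
    hard = torch.zeros_like(logits).scatter(-1, topk, 1.0)
    soft = k_subset_marginals(logits, k)
    return hard + (soft - soft.detach())  # straight-through via the marginals

In use, simple_style_sample(logits, k) returns a discrete k-hot mask on the forward pass while the exact marginals supply the gradient on the backward pass, which is the sense in which the marginals act as a proxy for the true gradient.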
Cite
Text
Ahmed et al. "SIMPLE: A Gradient Estimator for $k$-Subset Sampling." ICML 2023 Workshops: Differentiable_Almost_Everything, 2023.
Markdown
[Ahmed et al. "SIMPLE: A Gradient Estimator for $k$-Subset Sampling." ICML 2023 Workshops: Differentiable_Almost_Everything, 2023.](https://mlanthology.org/icmlw/2023/ahmed2023icmlw-simple/)
BibTeX
@inproceedings{ahmed2023icmlw-simple,
title = {{SIMPLE: A Gradient Estimator for $k$-Subset Sampling}},
author = {Ahmed, Kareem and Zeng, Zhe and Niepert, Mathias and Van den Broeck, Guy},
booktitle = {ICML 2023 Workshops: Differentiable_Almost_Everything},
year = {2023},
url = {https://mlanthology.org/icmlw/2023/ahmed2023icmlw-simple/}
}