Leveraging Importance Weights in Subset Selection
Abstract
We present a subset selection algorithm designed to work with arbitrary model families in a practical batch setting. In such a setting, an algorithm can sample examples one at a time but, in order to limit overhead costs, is only able to update its state (i.e., further train model weights) once a large enough batch of examples has been selected. Our algorithm, IWeS, selects examples by importance sampling, where the sampling probability assigned to each example is based on the entropy of models trained on previously selected batches. IWeS yields significant performance improvements over other subset selection algorithms on seven publicly available datasets. It is also competitive in an active learning setting, where label information is not available at selection time. We further provide an initial theoretical analysis to support our importance weighting approach, proving generalization and sampling rate bounds.
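To make the selection rule concrete, the sketch below illustrates one plausible reading of the abstract: sampling probabilities proportional to the predictive entropy of a model trained on earlier batches, with each drawn example carrying an importance weight so that later training can remain unbiased. This is a minimal sketch, not the paper's exact formulation; the function names, the proportional-to-entropy distribution, and the 1/(n·q_i) weights are assumptions made for illustration.

```python
import numpy as np

def entropy_sampling_probabilities(probs, eps=1e-12):
    """Sampling distribution proportional to predictive entropy.

    probs: (n_examples, n_classes) softmax outputs of a model trained on
    previously selected batches. (Assumed form, not the paper's exact one.)
    """
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return entropy / entropy.sum()

def select_batch(probs, batch_size, rng=None):
    """Draw a batch by importance sampling and return (indices, weights).

    The weights 1 / (n * q_i) make a reweighted average over the sampled
    examples an unbiased estimate of the average over the full pool.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    q = entropy_sampling_probabilities(probs)
    idx = rng.choice(len(q), size=batch_size, replace=False, p=q)
    weights = 1.0 / (len(q) * q[idx])
    return idx, weights

# Example usage with random softmax outputs standing in for a real model.
if __name__ == "__main__":
    rng = np.random.default_rng(42)
    logits = rng.normal(size=(1000, 10))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    idx, w = select_batch(probs, batch_size=64, rng=rng)
    print(idx[:5], w[:5])
```

In an actual pipeline, the model would be retrained on all batches selected so far before the next round of sampling; the weights could then be used to reweight the training loss on the selected examples.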
Cite
Text
Citovsky et al. "Leveraging Importance Weights in Subset Selection." International Conference on Learning Representations, 2023.
Markdown
[Citovsky et al. "Leveraging Importance Weights in Subset Selection." International Conference on Learning Representations, 2023.](https://mlanthology.org/iclr/2023/citovsky2023iclr-leveraging/)
BibTeX
@inproceedings{citovsky2023iclr-leveraging,
  title     = {{Leveraging Importance Weights in Subset Selection}},
  author    = {Citovsky, Gui and DeSalvo, Giulia and Kumar, Sanjiv and Ramalingam, Srikumar and Rostamizadeh, Afshin and Wang, Yunjuan},
  booktitle = {International Conference on Learning Representations},
  year      = {2023},
  url       = {https://mlanthology.org/iclr/2023/citovsky2023iclr-leveraging/}
}