Dropout Disagreement: A Recipe for Group Robustness with Fewer Annotations
Abstract
Empirical risk minimization (ERM) of neural networks can cause over-reliance on spurious correlations and poor generalization on minority groups. Deep feature reweighting (DFR) improves group robustness via last-layer retraining, but it requires full group and class annotations for the reweighting dataset. To eliminate this impractical requirement, we propose a one-shot active learning method that constructs the reweighting dataset from the points on which the ERM model's predictions disagree with and without dropout activated. Our experiments show our approach achieves 94% of DFR performance on the Waterbirds and CelebA datasets despite using no group annotations and up to 21$\times$ fewer class annotations.
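The selection step the abstract describes is simple enough to sketch. Below is a minimal PyTorch illustration, assuming an unshuffled data loader and a single stochastic dropout pass; the function names and these details are our assumptions, not the paper's implementation. The returned indices would then be class-labeled and used as the reweighting set for DFR's last-layer retraining.

```python
import torch
import torch.nn as nn

def set_dropout(model: nn.Module, active: bool) -> None:
    # Toggle only the Dropout layers; BatchNorm etc. stay in eval mode.
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train(active)

@torch.no_grad()
def dropout_disagreement_indices(model: nn.Module, loader) -> list[int]:
    """Indices of points where the ERM model's prediction flips when
    dropout is activated (one stochastic forward pass per batch).
    Assumes `loader` is unshuffled so offsets map back to the dataset."""
    model.eval()
    picked, offset = [], 0
    for x, _ in loader:
        set_dropout(model, False)
        clean = model(x).argmax(dim=1)   # deterministic predictions
        set_dropout(model, True)
        noisy = model(x).argmax(dim=1)   # dropout-perturbed predictions
        flips = (clean != noisy).nonzero().flatten() + offset
        picked.extend(flips.tolist())
        offset += x.size(0)
    set_dropout(model, False)
    return picked
```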
Cite
Text
LaBonte et al. "Dropout Disagreement: A Recipe for Group Robustness with Fewer Annotations." NeurIPS 2022 Workshops: DistShift, 2022.
Markdown
[LaBonte et al. "Dropout Disagreement: A Recipe for Group Robustness with Fewer Annotations." NeurIPS 2022 Workshops: DistShift, 2022.](https://mlanthology.org/neuripsw/2022/labonte2022neuripsw-dropout/)
BibTeX
@inproceedings{labonte2022neuripsw-dropout,
  title = {{Dropout Disagreement: A Recipe for Group Robustness with Fewer Annotations}},
  author = {LaBonte, Tyler and Muthukumar, Vidya and Kumar, Abhishek},
  booktitle = {NeurIPS 2022 Workshops: DistShift},
  year = {2022},
  url = {https://mlanthology.org/neuripsw/2022/labonte2022neuripsw-dropout/}
}