Combining Diverse Feature Priors
Abstract
To improve model generalization, model designers often restrict the features that their models use, either implicitly or explicitly. In this work, we explore the design space of leveraging such feature priors by viewing them as distinct perspectives on the data. Specifically, we find that models trained with diverse sets of explicit feature priors have less overlapping failure modes, and can thus be combined more effectively. Moreover, we demonstrate that jointly training such models on additional (unlabeled) data allows them to correct each other’s mistakes, which, in turn, leads to better generalization and resilience to spurious correlations.
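The joint training on unlabeled data described above is in the spirit of co-training. Below is a minimal, illustrative sketch (not the paper's exact procedure): two classifiers are given different feature views of the same data, simulated here as disjoint feature subsets, and in each round they exchange confident pseudo-labels on an unlabeled pool so that one view can correct the other's mistakes. The dataset, feature split, confidence threshold, and names are all assumptions made for illustration.

# Minimal co-training sketch with two "feature priors" (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_a, X_b = X[:, :10], X[:, 10:]          # two views standing in for two priors

labeled = rng.choice(len(y), size=100, replace=False)
unlabeled = np.setdiff1d(np.arange(len(y)), labeled)

pseudo = np.full(len(y), -1)             # known labels plus pseudo-labels
pseudo[labeled] = y[labeled]
idx_a, idx_b = list(labeled), list(labeled)   # each model's training pool

for _ in range(5):
    clf_a = LogisticRegression(max_iter=1000).fit(X_a[idx_a], pseudo[idx_a])
    clf_b = LogisticRegression(max_iter=1000).fit(X_b[idx_b], pseudo[idx_b])

    # Each model pseudo-labels the unlabeled pool; its confident predictions
    # are handed to the *other* model's training pool.
    for src, dst_idx, X_src in ((clf_a, idx_b, X_a), (clf_b, idx_a, X_b)):
        proba = src.predict_proba(X_src[unlabeled])
        confident = unlabeled[proba.max(axis=1) > 0.95]
        pseudo[confident] = src.predict(X_src[confident])
        dst_idx.extend(confident.tolist())

print("view-A acc:", clf_a.score(X_a, y), "view-B acc:", clf_b.score(X_b, y))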
Cite
Text
Jain et al. "Combining Diverse Feature Priors." International Conference on Machine Learning, 2022.
Markdown
[Jain et al. "Combining Diverse Feature Priors." International Conference on Machine Learning, 2022.](https://mlanthology.org/icml/2022/jain2022icml-combining/)
BibTeX
@inproceedings{jain2022icml-combining,
title = {{Combining Diverse Feature Priors}},
author = {Jain, Saachi and Tsipras, Dimitris and Madry, Aleksander},
booktitle = {International Conference on Machine Learning},
year = {2022},
pages = {9802--9832},
volume = {162},
url = {https://mlanthology.org/icml/2022/jain2022icml-combining/}
}