All Models Are Wrong, Some Are Useful: Model Selection with Limited Labels

Abstract

We introduce MODEL SELECTOR, a framework for label-efficient selection of pretrained classifiers. Given a pool of unlabeled target data, MODEL SELECTOR samples a small subset of highly informative examples for labeling, in order to efficiently identify the best pretrained model for deployment on this target dataset. Through extensive experiments, we demonstrate that MODEL SELECTOR drastically reduces the need for labeled data while consistently picking the best or near-best performing model. Across 18 model collections on 16 different datasets, comprising over 1,500 pretrained models, MODEL SELECTOR reduces the labeling cost of identifying the best model by up to 94.15% compared to the strongest baseline. Our results further highlight the robustness of MODEL SELECTOR, as it reduces the labeling cost by up to 72.41% when selecting a near-best model, whose accuracy is within 1% of that of the best model.
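The core idea of the abstract — spending a small labeling budget on informative examples in order to rank a pool of pretrained models — can be sketched as follows. This is a minimal illustrative heuristic, not the paper's actual MODEL SELECTOR algorithm: the disagreement-based acquisition criterion and all function names here are assumptions for demonstration only.

```python
import numpy as np

def select_best_model(model_preds, oracle_labels, budget):
    """Illustrative label-efficient model selection (NOT the paper's
    MODEL SELECTOR algorithm; a generic disagreement-based sketch).

    model_preds: (n_models, n_examples) array of predicted class labels.
    oracle_labels: (n_examples,) true labels, queried lazily up to `budget`.
    Returns the index of the model with the most correct predictions on
    the queried subset.
    """
    n_models, n_examples = model_preds.shape
    labeled = set()                   # indices whose labels have been queried
    correct = np.zeros(n_models)      # per-model correct counts on queried set
    for _ in range(min(budget, n_examples)):
        # Score each unlabeled example by model disagreement: the number of
        # distinct classes predicted for it (ties broken by lowest index).
        unlabeled = [i for i in range(n_examples) if i not in labeled]
        disagreement = [len(set(model_preds[:, i])) for i in unlabeled]
        pick = unlabeled[int(np.argmax(disagreement))]
        labeled.add(pick)             # "query" the oracle for this label
        correct += (model_preds[:, pick] == oracle_labels[pick])
    return int(np.argmax(correct))
```

With a budget far below the dataset size, the model ranked best on the queried subset serves as the selected model, mirroring the label-cost savings the abstract reports.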

Cite

Text

Okanovic et al. "All Models Are Wrong, Some Are Useful: Model Selection with Limited Labels." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.

Markdown

[Okanovic et al. "All Models Are Wrong, Some Are Useful: Model Selection with Limited Labels." Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, 2025.](https://mlanthology.org/aistats/2025/okanovic2025aistats-all/)

BibTeX

@inproceedings{okanovic2025aistats-all,
  title     = {{All Models Are Wrong, Some Are Useful: Model Selection with Limited Labels}},
  author    = {Okanovic, Patrik and Kirsch, Andreas and Kasper, Jannes and Hoefler, Torsten and Krause, Andreas and Gürel, Nezihe Merve},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  year      = {2025},
  pages     = {2035--2043},
  volume    = {258},
  url       = {https://mlanthology.org/aistats/2025/okanovic2025aistats-all/}
}