Uniform Priors for Data-Efficient Learning

Abstract

Few-shot or zero-shot adaptation to novel tasks is important for the scalability and deployment of machine learning models. It is therefore crucial to find properties that encourage more transferable features in deep networks for generalization. In this paper, we show that models that learn uniformly distributed features from the training data are able to perform better transfer learning at test time. Motivated by this, we evaluate our method, uniformity regularization (UR), on its ability to facilitate adaptation to unseen tasks and data across six distinct domains: Few-shot Learning with Images, Few-shot Learning with Language, Deep Metric Learning, Zero-Shot Domain Adaptation, Out-of-Distribution Classification, and Neural Radiance Fields. Across all experiments, we show that with UR we learn robust vision systems that consistently offer benefits over baselines trained without uniformity regularization, and that achieve state-of-the-art performance in Deep Metric Learning and in Few-shot Learning with images and language.
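
To make the idea of uniformity regularization concrete, below is a minimal PyTorch sketch of one standard feature-uniformity penalty: the log of the mean pairwise Gaussian potential on the unit hypersphere (as in Wang & Isola, 2020), which is minimized when embeddings spread uniformly. This is an illustrative assumption, not the paper's exact UR objective; the names uniformity_loss, t, and lambda_ur are hypothetical.

    import torch
    import torch.nn.functional as F

    def uniformity_loss(features: torch.Tensor, t: float = 2.0) -> torch.Tensor:
        # Project features onto the unit hypersphere.
        z = F.normalize(features, dim=1)
        # Pairwise squared Euclidean distances between all embeddings.
        sq_dists = torch.pdist(z, p=2).pow(2)
        # Log of the mean Gaussian potential: low when embeddings
        # are spread uniformly over the sphere, high when they cluster.
        return sq_dists.mul(-t).exp().mean().log()

A regularized training objective would then combine this term with the task loss, e.g. loss = task_loss + lambda_ur * uniformity_loss(embeddings), where lambda_ur trades off task fit against feature uniformity.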

Cite

Text

Sinha et al. "Uniform Priors for Data-Efficient Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022. doi:10.1109/CVPRW56347.2022.00447

Markdown

[Sinha et al. "Uniform Priors for Data-Efficient Learning." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022.](https://mlanthology.org/cvprw/2022/sinha2022cvprw-uniform/) doi:10.1109/CVPRW56347.2022.00447

BibTeX

@inproceedings{sinha2022cvprw-uniform,
  title     = {{Uniform Priors for Data-Efficient Learning}},
  author    = {Sinha, Samarth and Roth, Karsten and Goyal, Anirudh and Ghassemi, Marzyeh and Akata, Zeynep and Larochelle, Hugo and Garg, Animesh},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2022},
  pages     = {4016--4027},
  doi       = {10.1109/CVPRW56347.2022.00447},
  url       = {https://mlanthology.org/cvprw/2022/sinha2022cvprw-uniform/}
}