Big Transfer (BiT): General Visual Representation Learning
Abstract
Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes, from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19-task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct a detailed analysis of the main components that lead to high transfer performance.
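To make the transfer recipe concrete, below is a minimal sketch of fine-tuning a pre-trained BiT backbone on a small target task. It assumes the publicly released BiT-M ResNet-50x1 module on TF Hub (the URL, the 10-class task, and the fixed learning rate and schedule are illustrative assumptions; the paper's BiT-HyperRule instead selects resolution, schedule length, and MixUp usage from the dataset size).

```python
# Hedged sketch: BiT-style transfer with a new zero-initialized head,
# assuming the public BiT-M R50x1 TF Hub module. Not the authors' code.
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 10  # illustrative target task, e.g. CIFAR-10

# Pre-trained BiT backbone; trainable=True so the whole network is fine-tuned.
backbone = hub.KerasLayer(
    "https://tfhub.dev/google/bit/m-r50x1/1", trainable=True)

model = tf.keras.Sequential([
    backbone,
    # New task head, zero-initialized as in the paper.
    tf.keras.layers.Dense(NUM_CLASSES, kernel_initializer="zeros"),
])

# Plain SGD with momentum; the paper derives the schedule from dataset
# size via BiT-HyperRule, here we simply fix assumed values.
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.003, momentum=0.9),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# model.fit(train_ds, epochs=...)  # supply your own tf.data pipeline
```

The design point the sketch illustrates is that transfer needs no new components at fine-tuning time: one pre-trained backbone, a fresh linear head, and a single heuristic for the hyperparameters.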
Cite
Text
Kolesnikov et al. "Big Transfer (BiT): General Visual Representation Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58558-7_29
Markdown
[Kolesnikov et al. "Big Transfer (BiT): General Visual Representation Learning." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/kolesnikov2020eccv-big/) doi:10.1007/978-3-030-58558-7_29
BibTeX
@inproceedings{kolesnikov2020eccv-big,
title = {{Big Transfer (BiT): General Visual Representation Learning}},
author = {Kolesnikov, Alexander and Beyer, Lucas and Zhai, Xiaohua and Puigcerver, Joan and Yung, Jessica and Gelly, Sylvain and Houlsby, Neil},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58558-7_29},
url = {https://mlanthology.org/eccv/2020/kolesnikov2020eccv-big/}
}