Scaling Vision Transformers to 22 Billion Parameters
Abstract
The scaling of Transformers has driven breakthrough capabilities for language models. At present, the largest large language models (LLMs) contain upwards of 100B parameters. Vision Transformers (ViT) have introduced the same architecture to image and video modelling, but these have not yet been successfully scaled to nearly the same degree; the largest dense ViT contains 4B parameters (Chen et al., 2022). We present a recipe for highly efficient and stable training of a 22B-parameter ViT (ViT-22B) and perform a wide variety of experiments on the resulting model. When evaluated on downstream tasks (often with a lightweight linear model on frozen features), ViT-22B demonstrates increasing performance with scale. We further observe other interesting benefits of scale, including an improved tradeoff between fairness and performance, state-of-the-art alignment to human visual perception in terms of shape/texture bias, and improved robustness. ViT-22B demonstrates the potential for "LLM-like" scaling in vision, and provides key steps towards getting there.
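As a concrete illustration of the "lightweight linear model on frozen features" evaluation mentioned in the abstract (linear probing), the sketch below fits a linear classifier on precomputed features and reports test accuracy. This is a minimal sketch, not the authors' evaluation code: extract_features from a frozen ViT-22B backbone is assumed to have already produced the feature arrays, and the random arrays here are stand-ins for real embeddings.

# Minimal linear-probe sketch on frozen features (hypothetical stand-in data).
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe(train_feats, train_labels, test_feats, test_labels):
    """Fit a lightweight linear head on frozen features; return test accuracy."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)

# Random placeholder features; in practice these would be frozen ViT embeddings.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(1000, 1024))    # [num_examples, feature_dim]
train_labels = rng.integers(0, 10, size=1000)  # 10-class toy task
test_feats = rng.normal(size=(200, 1024))
test_labels = rng.integers(0, 10, size=200)
print(f"linear-probe accuracy: {linear_probe(train_feats, train_labels, test_feats, test_labels):.3f}")

Because only the linear head is trained, this kind of probe isolates the quality of the frozen representation itself, which is why the paper uses it to track downstream performance as the backbone scales.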
Cite
Text
Dehghani et al. "Scaling Vision Transformers to 22 Billion Parameters." International Conference on Machine Learning, 2023.
Markdown
[Dehghani et al. "Scaling Vision Transformers to 22 Billion Parameters." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/dehghani2023icml-scaling/)
BibTeX
@inproceedings{dehghani2023icml-scaling,
title = {{Scaling Vision Transformers to 22 Billion Parameters}},
author = {Dehghani, Mostafa and Djolonga, Josip and Mustafa, Basil and Padlewski, Piotr and Heek, Jonathan and Gilmer, Justin and Steiner, Andreas Peter and Caron, Mathilde and Geirhos, Robert and Alabdulmohsin, Ibrahim and Jenatton, Rodolphe and Beyer, Lucas and Tschannen, Michael and Arnab, Anurag and Wang, Xiao and Riquelme Ruiz, Carlos and Minderer, Matthias and Puigcerver, Joan and Evci, Utku and Kumar, Manoj and Van Steenkiste, Sjoerd and Elsayed, Gamaleldin Fathy and Mahendran, Aravindh and Yu, Fisher and Oliver, Avital and Huot, Fantine and Bastings, Jasmijn and Collier, Mark and Gritsenko, Alexey A. and Birodkar, Vighnesh and Vasconcelos, Cristina Nader and Tay, Yi and Mensink, Thomas and Kolesnikov, Alexander and Pavetic, Filip and Tran, Dustin and Kipf, Thomas and Lucic, Mario and Zhai, Xiaohua and Keysers, Daniel and Harmsen, Jeremiah J. and Houlsby, Neil},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {7480-7512},
volume = {202},
url = {https://mlanthology.org/icml/2023/dehghani2023icml-scaling/}
}