Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design
Abstract
Scaling laws have recently been employed to derive the compute-optimal model size (number of parameters) for a given compute budget. We advance and refine such methods to infer compute-optimal model shapes, such as width and depth, and successfully implement this in vision transformers. Our shape-optimized vision transformer, SoViT, achieves results competitive with models that exceed twice its size, despite being pre-trained with an equivalent amount of compute. For example, SoViT-400m/14 achieves 90.3% fine-tuning accuracy on ILSVRC2012, surpassing the much larger ViT-g/14 and approaching ViT-G/14 under identical settings, while also requiring less than half the inference cost. We conduct a thorough evaluation across multiple tasks, such as image classification, captioning, VQA and zero-shot transfer, demonstrating the effectiveness of our model across a broad range of domains and identifying limitations. Overall, our findings challenge the prevailing approach of blindly scaling up vision models and pave a path for more informed scaling.
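The abstract's core idea is to fit scaling laws and extrapolate them to a target compute budget before committing to a model shape. The sketch below is a minimal, hypothetical illustration of that general idea only: it fits a saturating power law of the form err(C) = a·C^(−b) + c to made-up (compute, error) points and extrapolates to a larger budget. The functional form, data values, and names used here are assumptions for illustration, not the paper's actual parametrization or results.

```python
# Hypothetical sketch: fit a saturating power law err(C) = a * C**(-b) + c
# to observed (compute, validation error) pairs, then extrapolate the error
# at a larger target compute budget. Illustrative only; not the paper's procedure.
import numpy as np
from scipy.optimize import curve_fit

def power_law(compute, a, b, c):
    # Saturating power law: error decays with compute toward an irreducible floor c.
    return a * compute ** (-b) + c

# Made-up measurements: pre-training compute (arbitrary units) vs. validation error.
compute = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
error = np.array([0.42, 0.35, 0.30, 0.27, 0.25])

# Fit the three coefficients, constraining them to be non-negative.
params, _ = curve_fit(power_law, compute, error, p0=(1.0, 0.3, 0.1), bounds=(0.0, np.inf))
a, b, c = params

target_compute = 1e6
print(f"Fitted law: {a:.3f} * C^(-{b:.3f}) + {c:.3f}")
print(f"Extrapolated error at C={target_compute:.0e}: {power_law(target_compute, *params):.4f}")
```

Repeating such a fit while varying one shape dimension at a time (e.g. width or depth) is one way such curves could inform a shape choice at a fixed compute budget; the candidate shapes and selection rule here are left unspecified.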
Cite
Text
Alabdulmohsin et al. "Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design." Neural Information Processing Systems, 2023.
Markdown
[Alabdulmohsin et al. "Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/alabdulmohsin2023neurips-getting/)
BibTeX
@inproceedings{alabdulmohsin2023neurips-getting,
title = {{Getting ViT in Shape: Scaling Laws for Compute-Optimal Model Design}},
author = {Alabdulmohsin, Ibrahim M and Zhai, Xiaohua and Kolesnikov, Alexander and Beyer, Lucas},
booktitle = {Neural Information Processing Systems},
year = {2023},
url = {https://mlanthology.org/neurips/2023/alabdulmohsin2023neurips-getting/}
}