When Do We Not Need Larger Vision Models?
Abstract
Scaling up the size of vision models has been the de facto standard to obtain more powerful visual representations. In this work, we discuss the point beyond which larger vision models are not necessary. We demonstrate the power of Scaling on Scales (S$^2$), whereby a pre-trained and frozen smaller vision model (e.g., ViT-B or ViT-L), run over multiple image scales, can outperform larger models (e.g., ViT-H or ViT-G) on classification, segmentation, depth estimation, Multimodal LLM (MLLM) benchmarks, and robotic manipulation. We further show that features of larger vision models can be well approximated by those of multi-scale smaller models through a linear transform, which suggests a multi-scale smaller model has comparable learning capacity to a larger model.
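To make the S$^2$ mechanism described in the abstract concrete, below is a minimal PyTorch sketch of multi-scale feature extraction: each scale resizes the image, splits it into sub-images at the base resolution, encodes each sub-image with the frozen backbone, stitches the features back together, pools them to the base spatial size, and concatenates across scales. The function name, scale set, and the assumption that the backbone returns a spatial feature map are illustrative choices, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def multiscale_features(backbone, images, scales=(1, 2), base_size=224):
    """S^2-style multi-scale features from a frozen backbone (illustrative sketch).

    Assumes `backbone(x)` maps (N, C, base_size, base_size) images to a
    spatial feature map of shape (N, D, h, w).
    """
    all_feats = []
    B, C = images.shape[:2]
    for s in scales:
        size = base_size * s
        # Resize the image to s times the base resolution.
        resized = F.interpolate(images, size=(size, size),
                                mode="bilinear", align_corners=False)
        # Split into s x s sub-images, each at the base resolution.
        subs = resized.unfold(2, base_size, base_size) \
                      .unfold(3, base_size, base_size)       # (B, C, s, s, base, base)
        subs = subs.permute(0, 2, 3, 1, 4, 5).reshape(-1, C, base_size, base_size)
        with torch.no_grad():                                 # backbone stays frozen
            f = backbone(subs)                                # (B*s*s, D, h, w)
        D, h, w = f.shape[1:]
        # Stitch sub-image features back into one large map, then pool to base size.
        f = f.reshape(B, s, s, D, h, w).permute(0, 3, 1, 4, 2, 5) \
             .reshape(B, D, s * h, s * w)
        f = F.adaptive_avg_pool2d(f, (h, w))
        all_feats.append(f)
    # Channel dimension grows with the number of scales.
    return torch.cat(all_feats, dim=1)
```

Under these assumptions, the multi-scale feature has the same spatial resolution as the single-scale feature but a channel dimension multiplied by the number of scales, which is the representation the abstract compares against larger models.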
Cite
Text
Shi et al. "When Do We Not Need Larger Vision Models?" NeurIPS 2024 Workshops: SSL, 2024.
Markdown
[Shi et al. "When Do We Not Need Larger Vision Models?" NeurIPS 2024 Workshops: SSL, 2024.](https://mlanthology.org/neuripsw/2024/shi2024neuripsw-we/)
BibTeX
@inproceedings{shi2024neuripsw-we,
  title = {{When Do We Not Need Larger Vision Models?}},
  author = {Shi, Baifeng and Wu, Ziyang and Mao, Maolin and Wang, Xin and Darrell, Trevor},
  booktitle = {NeurIPS 2024 Workshops: SSL},
  year = {2024},
  url = {https://mlanthology.org/neuripsw/2024/shi2024neuripsw-we/}
}