Scaling White-Box Transformers for Vision

Abstract

CRATE, a white-box transformer architecture designed to learn compressed and sparse representations, offers an intriguing alternative to standard vision transformers (ViTs) due to its inherent mathematical interpretability. Despite extensive investigations into the scaling behaviors of language and vision transformers, the scalability of CRATE remains an open question, which this paper aims to address. Specifically, we propose CRATE-$\alpha$, featuring strategic yet minimal modifications to the sparse coding block in the CRATE architecture design, together with a lightweight training recipe designed to improve the scalability of CRATE. Through extensive experiments, we demonstrate that CRATE-$\alpha$ scales effectively with larger model sizes and datasets. For example, our CRATE-$\alpha$-B substantially outperforms the prior best CRATE-B model on ImageNet classification by 3.7%, reaching an accuracy of 83.2%. When scaled further, our CRATE-$\alpha$-L obtains an ImageNet classification accuracy of 85.1%. Notably, these performance improvements are achieved while preserving, and potentially even enhancing, the interpretability of the learned CRATE models: the token representations of increasingly larger trained CRATE-$\alpha$ models yield increasingly higher-quality unsupervised object segmentation of images.
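
For readers unfamiliar with the sparse coding block the abstract refers to: CRATE builds each layer around an unrolled step of ISTA (iterative shrinkage-thresholding), which sparsifies token features against a learned dictionary. Below is a minimal sketch of such a block, assuming PyTorch; the names `ISTASparseCodingBlock`, `step_size`, and `lam` are illustrative assumptions, not the authors' code, and the specific CRATE-$\alpha$ modifications to this block are not reproduced here.

```python
import torch
import torch.nn as nn


class ISTASparseCodingBlock(nn.Module):
    """One unrolled ISTA step: move token features z (rows) toward a sparse,
    non-negative code over a learned dictionary D via a single
    proximal-gradient step on
        min_x  lam * ||x||_1 + 0.5 * ||z - D x||^2,   initialized at x = z.
    """

    def __init__(self, dim: int, step_size: float = 0.1, lam: float = 0.1):
        super().__init__()
        # Learned square dictionary. CRATE-alpha's changes concern how this
        # dictionary is parameterized; that is not reproduced in this sketch.
        self.D = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.step_size = step_size
        self.lam = lam

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, tokens, dim); each row is one token's feature vector.
        # Gradient of the reconstruction term 0.5 * ||z - D x||^2 at x = z,
        # written in row convention: (z D^T - z) D.
        grad = (z @ self.D.T - z) @ self.D
        # ReLU realizes a non-negative soft-threshold, so the output is an
        # (approximately) sparse, non-negative code of the input tokens.
        return torch.relu(z - self.step_size * grad - self.step_size * self.lam)


# Usage: sparsify a batch of ViT-style token features.
block = ISTASparseCodingBlock(dim=768)
tokens = torch.randn(2, 196, 768)          # (batch, tokens, dim)
sparse_tokens = block(tokens)              # same shape, encouraged to be sparse
```

The ReLU here plays the role of the soft-thresholding operator restricted to non-negative codes, which is what makes a single step of this block both cheap to compute and interpretable as one iteration of a sparse coding algorithm.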

Cite

Text

Yang et al. "Scaling White-Box Transformers for Vision." Neural Information Processing Systems, 2024. doi:10.52202/079017-1167

Markdown

[Yang et al. "Scaling White-Box Transformers for Vision." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/yang2024neurips-scaling/) doi:10.52202/079017-1167

BibTeX

@inproceedings{yang2024neurips-scaling,
  title     = {{Scaling White-Box Transformers for Vision}},
  author    = {Yang, Jinrui and Li, Xianhang and Pai, Druv and Zhou, Yuyin and Ma, Yi and Yu, Yaodong and Xie, Cihang},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-1167},
  url       = {https://mlanthology.org/neurips/2024/yang2024neurips-scaling/}
}