VMamba: Visual State Space Model

Abstract

Designing computationally efficient network architectures remains an ongoing need in computer vision. In this paper, we adapt Mamba, a state-space language model, into VMamba, a vision backbone with linear time complexity. At the core of VMamba is a stack of Visual State-Space (VSS) blocks with the 2D Selective Scan (SS2D) module. By traversing along four scanning routes, SS2D bridges the gap between the ordered nature of 1D selective scans and the non-sequential structure of 2D vision data, facilitating the collection of contextual information from various sources and perspectives. Based on the VSS blocks, we develop a family of VMamba architectures and accelerate them through a succession of architectural and implementation enhancements. Extensive experiments demonstrate VMamba’s promising performance across diverse visual perception tasks, highlighting its superior input scaling efficiency compared to existing benchmark models. Source code is available at https://github.com/MzeroMiko/VMamba.
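To make the four-route traversal concrete, below is a minimal sketch of the cross-scan/cross-merge idea behind SS2D. The names (cross_scan, cross_merge, selective_scan_1d) are illustrative stand-ins, not the paper's implementation, and the 1D selective scan is replaced by an identity placeholder.

import torch

def cross_scan(x):
    # x: (C, H, W) feature map -> (4, C, H*W), one 1D sequence per route
    C, H, W = x.shape
    row = x.reshape(C, H * W)                  # route 1: row-major, forward
    col = x.transpose(1, 2).reshape(C, H * W)  # route 2: column-major, forward
    return torch.stack([row, col, row.flip(-1), col.flip(-1)])  # routes 3/4: reversed

def selective_scan_1d(seq):
    # Identity placeholder for Mamba's data-dependent 1D selective scan,
    # which mixes tokens causally along the sequence in linear time.
    return seq

def cross_merge(y, H, W):
    # y: (4, C, H*W) scanned sequences -> (C, H, W), undoing each route
    C = y.shape[1]
    row = y[0] + y[2].flip(-1)                 # undo the reversed row route
    col = y[1] + y[3].flip(-1)                 # undo the reversed column route
    col = col.reshape(C, W, H).transpose(1, 2).reshape(C, H * W)  # back to row-major
    return (row + col).reshape(C, H, W)

x = torch.randn(96, 14, 14)                    # e.g. one stage's feature map
seqs = cross_scan(x)
out = cross_merge(torch.stack([selective_scan_1d(s) for s in seqs]), 14, 14)
assert out.shape == x.shape

Since each route is an ordinary 1D sequence of length H*W, the selective scan along every route runs in time linear in the number of pixels, which is the source of VMamba's linear complexity in input size.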

Cite

Text

Liu et al. "VMamba: Visual State Space Model." Neural Information Processing Systems, 2024. doi:10.52202/079017-3273

Markdown

[Liu et al. "VMamba: Visual State Space Model." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/liu2024neurips-vmamba/) doi:10.52202/079017-3273

BibTeX

@inproceedings{liu2024neurips-vmamba,
  title     = {{VMamba: Visual State Space Model}},
  author    = {Liu, Yue and Tian, Yunjie and Zhao, Yuzhong and Yu, Hongtian and Xie, Lingxi and Wang, Yaowei and Ye, Qixiang and Jiao, Jianbin and Liu, Yunfan},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-3273},
  url       = {https://mlanthology.org/neurips/2024/liu2024neurips-vmamba/}
}