VideoMamba: Spatio-Temporal Selective State Space Model

Abstract

We introduce VideoMamba, a novel adaptation of the pure Mamba architecture designed specifically for video recognition. Unlike transformers, whose self-attention mechanisms incur high computational costs due to quadratic complexity, VideoMamba leverages Mamba’s linear complexity and selective SSM mechanism for more efficient processing. The proposed Spatio-Temporal Forward and Backward SSM allows the model to effectively capture the complex relationships between non-sequential spatial and sequential temporal information in videos. Consequently, VideoMamba is not only resource-efficient but also effective at capturing long-range dependencies in videos, as demonstrated by competitive performance and outstanding efficiency on a variety of video understanding benchmarks. Our work highlights the potential of VideoMamba as a powerful tool for video understanding, offering a simple yet effective baseline for future research in video analysis.
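To make the two ideas in the abstract concrete, the sketch below illustrates (i) a naive selective SSM recurrence, which processes a token sequence in time linear in its length, and (ii) a forward-plus-backward scan over flattened spatio-temporal video tokens. This is a minimal illustration written for this page, not the authors' implementation: all class names, parameter shapes, and initializations (`SelectiveScan`, `BidirectionalSTBlock`, `state_dim`, etc.) are hypothetical simplifications of the Mamba-style mechanism the paper builds on.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectiveScan(nn.Module):
    """Naive selective SSM: h_t = dA_t * h_{t-1} + dB_t * x_t, y_t = <C_t, h_t>.
    A single O(L) pass over the sequence, unlike O(L^2) self-attention.
    (Hypothetical sketch; real Mamba uses a fused, hardware-aware scan.)"""
    def __init__(self, dim, state_dim=16):
        super().__init__()
        # Input-dependent ("selective") projections for the SSM parameters.
        self.to_delta = nn.Linear(dim, dim)        # per-token step size
        self.to_B = nn.Linear(dim, state_dim)
        self.to_C = nn.Linear(dim, state_dim)
        self.A_log = nn.Parameter(torch.zeros(dim, state_dim))  # A = -exp(A_log)

    def forward(self, x):                          # x: (batch, length, dim)
        b, l, d = x.shape
        delta = F.softplus(self.to_delta(x))       # (b, l, d), positive step sizes
        A = -torch.exp(self.A_log)                 # (d, n), stable (negative) dynamics
        B, C = self.to_B(x), self.to_C(x)          # each (b, l, n)
        h = x.new_zeros(b, d, A.shape[1])          # hidden state
        ys = []
        for t in range(l):                         # linear-time recurrent scan
            dA = torch.exp(delta[:, t].unsqueeze(-1) * A)            # (b, d, n)
            dB = delta[:, t].unsqueeze(-1) * B[:, t].unsqueeze(1)    # (b, d, n)
            h = dA * h + dB * x[:, t].unsqueeze(-1)
            ys.append((h * C[:, t].unsqueeze(1)).sum(-1))            # (b, d)
        return torch.stack(ys, dim=1)              # (b, l, d)

class BidirectionalSTBlock(nn.Module):
    """Hypothetical spatio-temporal forward/backward block: scan the flattened
    (T*H*W) token sequence in both directions so every token can aggregate
    context from before and after it in the scan order."""
    def __init__(self, dim):
        super().__init__()
        self.fwd = SelectiveScan(dim)
        self.bwd = SelectiveScan(dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):                     # tokens: (batch, T*H*W, dim)
        z = self.norm(tokens)
        out = self.fwd(z) + self.bwd(z.flip(1)).flip(1)  # forward + backward scans
        return tokens + out                        # residual connection

tokens = torch.randn(2, 4 * 7 * 7, 64)             # e.g. 4 frames of 7x7 patch tokens
block = BidirectionalSTBlock(64)
print(block(tokens).shape)                         # torch.Size([2, 196, 64])
```

Because the state `h` carries information across the whole scan, the per-token cost stays constant as the clip grows, which is the efficiency argument the abstract makes against quadratic self-attention.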

Cite

Text

Park et al. "VideoMamba: Spatio-Temporal Selective State Space Model." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72698-9_1

Markdown

[Park et al. "VideoMamba: Spatio-Temporal Selective State Space Model." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/park2024eccv-videomamba/) doi:10.1007/978-3-031-72698-9_1

BibTeX

@inproceedings{park2024eccv-videomamba,
  title     = {{VideoMamba: Spatio-Temporal Selective State Space Model}},
  author    = {Park, Jinyoung and Kim, Hee-Seon and Ko, Kangwook and Kim, Minbeom and Kim, Changick},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-72698-9_1},
  url       = {https://mlanthology.org/eccv/2024/park2024eccv-videomamba/}
}