PackMamba: Efficient Processing of Variable-Length Sequences in Mamba Training

Abstract

With the evolution of large language models, traditional Transformer models become computationally demanding for long sequences because their computation grows quadratically with sequence length. Mamba, emerging as a groundbreaking architecture in generative AI, demonstrates remarkable proficiency in handling long sequences with reduced computational and memory complexity. Nevertheless, Mamba's existing training framework is inefficient for variable-length sequence inputs: training one sequence at a time yields low GPU utilization, while padding variable-length sequences to a common maximum length in a batch incurs considerable memory and computational overhead. To address this problem, we analyze the performance of Mamba's bottleneck operators under diverse tensor shapes and propose PackMamba, a high-throughput Mamba variant that efficiently handles variable-length sequences. Diving deep into state-space models (SSMs), we modify the parallel operators to avoid passing information between individual sequences while maintaining high performance. Leveraging hardware-software co-optimization, this modification ensures coalesced memory access to position indices without extra kernel overhead. Experimental results on an NVIDIA A100 GPU demonstrate throughput exceeding the baseline single-sequence processing scheme: a 3.06x speedup on the 1.4B model and 2.62x on the 2.8B model.
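The core idea the abstract describes, packing several variable-length sequences into one stream while preventing the recurrent SSM state from leaking across sequence boundaries, can be illustrated with a toy sketch. The code below is not the paper's implementation: the scalar recurrence, the `pack_sequences` helper, and the boundary-reset logic are simplified stand-ins assumed for illustration (the actual work modifies Mamba's parallel scan kernels).

```python
import numpy as np

def pack_sequences(seqs):
    """Concatenate variable-length sequences into one packed stream,
    recording a boundary mask that marks the first step of each sequence."""
    packed = np.concatenate(seqs)
    boundary = np.zeros(len(packed), dtype=bool)
    offset = 0
    for s in seqs:
        boundary[offset] = True
        offset += len(s)
    return packed, boundary

def ssm_scan(packed, boundary, a=0.9, b=1.0):
    """Toy 1-D SSM recurrence h_t = a*h_{t-1} + b*x_t. The hidden state is
    reset at each sequence boundary, so no information crosses between
    packed sequences (a sequential stand-in for Mamba's parallel scan)."""
    h = 0.0
    out = np.empty(len(packed), dtype=float)
    for t, x in enumerate(packed):
        if boundary[t]:
            h = 0.0  # reset state at the start of each packed sequence
        h = a * h + b * x
        out[t] = h
    return out
```

Running the scan on a packed batch then gives the same outputs as scanning each sequence independently, which is the correctness condition the boundary handling must preserve.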

Cite

Text

Xu et al. "PackMamba: Efficient Processing of Variable-Length Sequences in Mamba Training." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91979-4_4

Markdown

[Xu et al. "PackMamba: Efficient Processing of Variable-Length Sequences in Mamba Training." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/xu2024eccvw-packmamba/) doi:10.1007/978-3-031-91979-4_4

BibTeX

@inproceedings{xu2024eccvw-packmamba,
  title     = {{PackMamba: Efficient Processing of Variable-Length Sequences in Mamba Training}},
  author    = {Xu, Haoran and Liu, Ziqian and Fu, Rong and Su, Zhongling and Wang, Zerui and Cai, Zheng and Pei, Zhilin and Zhang, Xingcheng},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {34--42},
  doi       = {10.1007/978-3-031-91979-4_4},
  url       = {https://mlanthology.org/eccvw/2024/xu2024eccvw-packmamba/}
}