Memory-Efficient Vision Transformers: An Activation-Aware Mixed-Rank Compression Strategy

Cite

Text

Azizi et al. "Memory-Efficient Vision Transformers: An Activation-Aware Mixed-Rank Compression Strategy." European Conference on Computer Vision Workshops, 2024. doi:10.1007/978-3-031-91979-4_6

Markdown

[Azizi et al. "Memory-Efficient Vision Transformers: An Activation-Aware Mixed-Rank Compression Strategy." European Conference on Computer Vision Workshops, 2024.](https://mlanthology.org/eccvw/2024/azizi2024eccvw-memoryefficient/) doi:10.1007/978-3-031-91979-4_6

BibTeX

@inproceedings{azizi2024eccvw-memoryefficient,
  title     = {{Memory-Efficient Vision Transformers: An Activation-Aware Mixed-Rank Compression Strategy}},
  author    = {Azizi, Seyedarmin and Nazemi, Mahdi and Pedram, Massoud},
  booktitle = {European Conference on Computer Vision Workshops},
  year      = {2024},
  pages     = {55--66},
  doi       = {10.1007/978-3-031-91979-4_6},
  url       = {https://mlanthology.org/eccvw/2024/azizi2024eccvw-memoryefficient/}
}