EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss

Abstract

We present EfficientViT-SAM, a new family of accelerated segment anything models. We retain SAM's lightweight prompt encoder and mask decoder while replacing the heavy image encoder with EfficientViT. For training, we begin with knowledge distillation from the SAM-ViT-H image encoder to EfficientViT. Subsequently, we conduct end-to-end training on the SA-1B dataset. Benefiting from EfficientViT's efficiency and capacity, EfficientViT-SAM delivers a 48.9× measured TensorRT speedup on an A100 GPU over SAM-ViT-H without sacrificing performance. Our code and pre-trained models are released at https://github.com/mit-han-lab/efficientvit.
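
The abstract describes a two-stage recipe whose first step is encoder-level distillation. The sketch below illustrates that step under stated assumptions: it is not the authors' exact training code (which lives in the linked repository), and the model constructors, dataloader, and hyperparameters are hypothetical placeholders. It only shows the core idea of matching the student EfficientViT's image embeddings to the frozen SAM-ViT-H teacher's embeddings.

import torch
import torch.nn.functional as F

def distill_step(student, teacher, images, optimizer):
    # One distillation step (assumed loss: MSE on image embeddings).
    # `teacher` is the frozen SAM-ViT-H image encoder; `student` is
    # the EfficientViT image encoder being trained.
    with torch.no_grad():
        target = teacher(images)   # teacher embeddings, no gradients
    pred = student(images)         # student embeddings
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

After this stage, the abstract states that the full model (with SAM's original prompt encoder and mask decoder attached) is trained end-to-end on SA-1B.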

Cite

Text

Zhang et al. "EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024. doi:10.1109/CVPRW63382.2024.00782

Markdown

[Zhang et al. "EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss." IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.](https://mlanthology.org/cvprw/2024/zhang2024cvprw-efficientvitsam/) doi:10.1109/CVPRW63382.2024.00782

BibTeX

@inproceedings{zhang2024cvprw-efficientvitsam,
  title     = {{EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss}},
  author    = {Zhang, Zhuoyang and Cai, Han and Han, Song},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
  year      = {2024},
  pages     = {7859--7863},
  doi       = {10.1109/CVPRW63382.2024.00782},
  url       = {https://mlanthology.org/cvprw/2024/zhang2024cvprw-efficientvitsam/}
}