Vision Transformers Are Good Mask Auto-Labelers

Abstract

We propose Mask Auto-Labeler (MAL), a high-quality Transformer-based mask auto-labeling framework for instance segmentation using only box annotations. MAL takes box-cropped images as inputs and conditionally generates their mask pseudo-labels. We show that Vision Transformers are good mask auto-labelers. Our method significantly reduces the gap between auto-labeling and human annotation in terms of mask quality. Instance segmentation models trained using the MAL-generated masks can nearly match the performance of their fully supervised counterparts, retaining up to 97.4% of the performance of fully supervised models. The best model achieves 44.1% mAP on COCO instance segmentation (test-dev 2017), outperforming state-of-the-art box-supervised methods by significant margins. Qualitative results indicate that masks produced by MAL are, in some cases, even better than human annotations.
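The sketch below is a rough, hypothetical illustration of the pipeline the abstract describes: crop each ground-truth box from the image, predict a mask for that crop, and paste the result back into the full-image frame as a per-instance pseudo-label. It is written in PyTorch; the `MaskAutoLabeler` class, its trivial convolutional head, and the `auto_label` helper are placeholders standing in for the paper's ViT-based architecture and are not the authors' released implementation.

```python
# Hypothetical sketch of box-driven mask auto-labeling (not the official MAL code).
import torch
import torch.nn.functional as F
from torch import nn


class MaskAutoLabeler(nn.Module):
    """Placeholder mask predictor over box crops (stands in for a ViT encoder + mask head)."""

    def __init__(self, crop_size: int = 224):
        super().__init__()
        self.crop_size = crop_size
        # A trivial conv layer stands in for the Transformer backbone and decoder.
        self.head = nn.Conv2d(3, 1, kernel_size=3, padding=1)

    def forward(self, crop: torch.Tensor) -> torch.Tensor:
        # crop: (1, 3, crop_size, crop_size) -> per-pixel mask logits of the same spatial size.
        return self.head(crop)


def auto_label(image: torch.Tensor, boxes: torch.Tensor,
               labeler: MaskAutoLabeler, thresh: float = 0.5) -> torch.Tensor:
    """Generate one binary pseudo-mask per box, pasted into the full-image frame.

    image: (3, H, W) float tensor; boxes: (N, 4) as (x1, y1, x2, y2) in pixels.
    """
    _, H, W = image.shape
    masks = torch.zeros(len(boxes), H, W)
    for i, box in enumerate(boxes):
        x1, y1, x2, y2 = [int(round(float(v))) for v in box]
        # Crop the box region and resize it to the labeler's fixed input size.
        crop = image[:, y1:y2, x1:x2].unsqueeze(0)
        crop = F.interpolate(crop, size=(labeler.crop_size, labeler.crop_size),
                             mode="bilinear", align_corners=False)
        logits = labeler(crop)
        # Resize the predicted mask back to the box size and paste it in place.
        mask = F.interpolate(logits, size=(y2 - y1, x2 - x1),
                             mode="bilinear", align_corners=False)
        masks[i, y1:y2, x1:x2] = (mask[0, 0].sigmoid() > thresh).float()
    return masks


if __name__ == "__main__":
    img = torch.rand(3, 480, 640)
    boxes = torch.tensor([[50.0, 60.0, 200.0, 300.0]])
    pseudo_masks = auto_label(img, boxes, MaskAutoLabeler())
    print(pseudo_masks.shape)  # torch.Size([1, 480, 640])
```

The resulting pseudo-masks would then serve as training targets for a standard instance segmentation model in place of human-annotated masks, which is the two-stage setup the abstract outlines.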

Cite

Text

Lan et al. "Vision Transformers Are Good Mask Auto-Labelers." Conference on Computer Vision and Pattern Recognition, 2023. doi:10.1109/CVPR52729.2023.02274

Markdown

[Lan et al. "Vision Transformers Are Good Mask Auto-Labelers." Conference on Computer Vision and Pattern Recognition, 2023.](https://mlanthology.org/cvpr/2023/lan2023cvpr-vision/) doi:10.1109/CVPR52729.2023.02274

BibTeX

@inproceedings{lan2023cvpr-vision,
  title     = {{Vision Transformers Are Good Mask Auto-Labelers}},
  author    = {Lan, Shiyi and Yang, Xitong and Yu, Zhiding and Wu, Zuxuan and Alvarez, Jose M. and Anandkumar, Anima},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2023},
  pages     = {23745--23755},
  doi       = {10.1109/CVPR52729.2023.02274},
  url       = {https://mlanthology.org/cvpr/2023/lan2023cvpr-vision/}
}