Masked-Attention Mask Transformer for Universal Image Segmentation
Abstract
Image segmentation groups pixels according to different semantics, e.g., category or instance membership; each choice of semantics defines a task. While only the semantics differ across tasks, current research focuses on designing a specialized architecture for each one. We present Masked-attention Mask Transformer (Mask2Former), a new architecture capable of addressing any image segmentation task (panoptic, instance, or semantic). Its key component is masked attention, which extracts localized features by constraining cross-attention to predicted mask regions. In addition to reducing research effort by at least three times, Mask2Former outperforms the best specialized architectures by a significant margin on four popular datasets. Most notably, it sets a new state of the art for panoptic segmentation (57.8 PQ on COCO), instance segmentation (50.1 AP on COCO), and semantic segmentation (57.7 mIoU on ADE20K).
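For readers who want a concrete picture of the masked-attention step described in the abstract, the sketch below illustrates the idea in PyTorch for a single attention head: cross-attention logits outside a query's predicted mask are set to negative infinity before the softmax. The function name, tensor shapes, threshold, and empty-mask handling here are illustrative assumptions, not the paper's reference implementation (which uses multi-head attention over multi-scale features).

```python
import torch

def masked_cross_attention(queries, keys, values, mask_logits, threshold=0.5):
    """Sketch of Mask2Former-style masked attention (single head).

    queries:      (N, d)  object queries
    keys, values: (HW, d) flattened image features
    mask_logits:  (N, HW) mask predictions from the previous decoder layer
    """
    d = queries.shape[-1]
    # Standard scaled dot-product attention logits.
    attn_logits = queries @ keys.transpose(-2, -1) / d ** 0.5  # (N, HW)
    # Constrain cross-attention to each query's predicted mask region
    # by blocking all locations outside the binarized mask.
    foreground = mask_logits.sigmoid() > threshold  # (N, HW), bool
    attn_logits = attn_logits.masked_fill(~foreground, float("-inf"))
    attn = attn_logits.softmax(dim=-1)
    # An entirely empty mask yields a NaN softmax row; zero it so the
    # query simply receives no update from this layer (an assumption
    # made for this sketch).
    attn = torch.nan_to_num(attn)
    return attn @ values  # (N, d)
```

The released implementation handles the empty-mask case differently, reportedly falling back to standard (unmasked) cross-attention for such queries, and recomputes the attention mask from the mask prediction of each preceding decoder layer.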
Cite
Text
Cheng et al. "Masked-Attention Mask Transformer for Universal Image Segmentation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00135Markdown
[Cheng et al. "Masked-Attention Mask Transformer for Universal Image Segmentation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/cheng2022cvpr-maskedattention/) doi:10.1109/CVPR52688.2022.00135BibTeX
@inproceedings{cheng2022cvpr-maskedattention,
title = {{Masked-Attention Mask Transformer for Universal Image Segmentation}},
author = {Cheng, Bowen and Misra, Ishan and Schwing, Alexander G. and Kirillov, Alexander and Girdhar, Rohit},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2022},
pages = {1290--1299},
doi = {10.1109/CVPR52688.2022.00135},
url = {https://mlanthology.org/cvpr/2022/cheng2022cvpr-maskedattention/}
}