CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation

Abstract

We propose Clustering Mask Transformer (CMT-DeepLab), a transformer-based framework for panoptic segmentation designed around clustering. It rethinks the existing transformer architectures used in segmentation and detection: CMT-DeepLab treats the object queries as cluster centers, which take on the role of grouping pixels when applied to segmentation. The clustering is computed with an alternating procedure that first assigns pixels to clusters by their feature affinity, and then updates the cluster centers and pixel features. Together, these operations comprise the Clustering Mask Transformer (CMT) layer, which produces cross-attention that is denser and more consistent with the final segmentation task. CMT-DeepLab significantly improves performance over prior art by 4.4% PQ, achieving a new state-of-the-art of 55.7% PQ on the COCO test-dev set.
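To make the alternating procedure concrete, below is a minimal NumPy sketch of one clustering update in the spirit of the CMT layer. This is an illustration only, not the authors' implementation: the function name `cmt_style_clustering_step`, the soft (softmax) assignment along the cluster dimension, and the residual pixel-feature update are assumptions made for readability.

```python
import numpy as np


def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def cmt_style_clustering_step(pixels, centers):
    """One alternating clustering update, sketched from the abstract.

    pixels:  (N, D) pixel features (a flattened feature map).
    centers: (K, D) cluster centers (the object queries).
    Returns updated pixels, centers, and the (N, K) soft assignment.
    """
    # Step 1: assign pixels to clusters by feature affinity
    # (soft assignment via softmax over the K clusters -- an assumption).
    affinity = pixels @ centers.T                 # (N, K)
    assignment = softmax(affinity, axis=-1)       # (N, K)

    # Step 2: update each cluster center as the assignment-weighted
    # mean of its pixels.
    weights = assignment / (assignment.sum(axis=0, keepdims=True) + 1e-6)
    centers = weights.T @ pixels                  # (K, D)

    # Step 3: update pixel features with their assigned centers
    # (a residual update, chosen here for illustration).
    pixels = pixels + assignment @ centers        # (N, D)
    return pixels, centers, assignment


# Toy usage: a 64x64 feature map, 8 cluster centers, 3 alternating rounds.
rng = np.random.default_rng(0)
pixels = rng.normal(size=(64 * 64, 128))
centers = rng.normal(size=(8, 128))
for _ in range(3):
    pixels, centers, assignment = cmt_style_clustering_step(pixels, centers)
mask_ids = assignment.argmax(axis=-1)  # per-pixel cluster id, i.e. a segmentation
```

Reading the soft assignment column-wise recovers the mask for each query, which is why the paper describes the resulting cross-attention as denser and better aligned with the segmentation objective.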

Cite

Text

Yu et al. "CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation." Conference on Computer Vision and Pattern Recognition, 2022. doi:10.1109/CVPR52688.2022.00259

Markdown

[Yu et al. "CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation." Conference on Computer Vision and Pattern Recognition, 2022.](https://mlanthology.org/cvpr/2022/yu2022cvpr-cmtdeeplab/) doi:10.1109/CVPR52688.2022.00259

BibTeX

@inproceedings{yu2022cvpr-cmtdeeplab,
  title     = {{CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation}},
  author    = {Yu, Qihang and Wang, Huiyu and Kim, Dahun and Qiao, Siyuan and Collins, Maxwell and Zhu, Yukun and Adam, Hartwig and Yuille, Alan and Chen, Liang-Chieh},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2022},
  pages     = {2560--2570},
  doi       = {10.1109/CVPR52688.2022.00259},
  url       = {https://mlanthology.org/cvpr/2022/yu2022cvpr-cmtdeeplab/}
}