Location-Aware Self-Supervised Transformers for Semantic Segmentation

Abstract

Pixel-level labels are particularly expensive to acquire. Hence, pretraining is a critical step to improve models on a task like semantic segmentation. However, prominent algorithms for pretraining neural networks use image-level objectives, e.g., image classification, image-text alignment à la CLIP, or self-supervised contrastive learning. These objectives do not model spatial information, which might be sub-optimal when finetuning on downstream tasks with spatial reasoning. In this work, we pretrain networks with a location-aware (LOCA) self-supervised method which fosters the emergence of strong dense features. Specifically, we use both a patch-level clustering scheme to mine dense pseudo-labels and a relative location prediction task to encourage learning about object parts and their spatial arrangement. Our experiments show that LOCA pretraining leads to representations that transfer competitively to challenging and diverse semantic segmentation datasets.

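The abstract names two ingredients: a patch-level clustering scheme that mines dense pseudo-labels, and a relative location prediction task over patch tokens. As a rough illustration of the second ingredient only, the sketch below shows one way such an objective could look: each patch token of a query view is classified into a cell of the reference view's patch grid. This is a minimal PyTorch-style sketch; the module name, grid size, and loss formulation are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RelativeLocationHead(nn.Module):
    """Illustrative head: map each query patch token to logits over the
    grid_h * grid_w positions of a reference view's patch grid."""
    def __init__(self, dim: int, grid_h: int = 14, grid_w: int = 14):
        super().__init__()
        self.classifier = nn.Linear(dim, grid_h * grid_w)

    def forward(self, query_tokens: torch.Tensor) -> torch.Tensor:
        # query_tokens: (B, N_q, dim) patch tokens from a ViT-style encoder
        return self.classifier(query_tokens)  # (B, N_q, grid_h * grid_w)

def relative_location_loss(logits: torch.Tensor, target_cells: torch.Tensor) -> torch.Tensor:
    # logits: (B, N_q, P) position logits for each query patch
    # target_cells: (B, N_q) index of the reference-grid cell each query patch came from
    return F.cross_entropy(logits.flatten(0, 1), target_cells.flatten())

In this hypothetical setup, target_cells would be derived from the known crop geometry relating the query view to the reference view, so the objective needs no human labels.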
Cite

Text

Caron et al. "Location-Aware Self-Supervised Transformers for Semantic Segmentation." Winter Conference on Applications of Computer Vision, 2024.

Markdown

[Caron et al. "Location-Aware Self-Supervised Transformers for Semantic Segmentation." Winter Conference on Applications of Computer Vision, 2024.](https://mlanthology.org/wacv/2024/caron2024wacv-locationaware/)

BibTeX

@inproceedings{caron2024wacv-locationaware,
  title     = {{Location-Aware Self-Supervised Transformers for Semantic Segmentation}},
  author    = {Caron, Mathilde and Houlsby, Neil and Schmid, Cordelia},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2024},
  pages     = {117--127},
  url       = {https://mlanthology.org/wacv/2024/caron2024wacv-locationaware/}
}