CP2: Copy-Paste Contrastive Pretraining for Semantic Segmentation

Abstract

Recent advances in self-supervised contrastive learning yield good image-level representations, which favor classification tasks but usually neglect detailed pixel-level information, leading to unsatisfactory transfer performance on dense prediction tasks such as semantic segmentation. In this work, we propose a pixel-wise contrastive learning method called CP2 (Copy-Paste Contrastive Pretraining), which facilitates both image- and pixel-level representation learning and is therefore better suited to downstream dense prediction tasks. Concretely, we copy-paste a random crop from an image (the foreground) onto different background images and pretrain a semantic segmentation model with the objective of 1) distinguishing the foreground pixels from the background pixels, and 2) identifying the composed images that share the same foreground. Experiments show the strong performance of CP2 on downstream semantic segmentation: by finetuning CP2-pretrained models on PASCAL VOC 2012, we obtain 78.6% mIoU with a ResNet-50 and 79.5% with a ViT-S.
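
To make the two objectives concrete, below is a minimal PyTorch sketch of the copy-paste composition plus an image-level InfoNCE loss and a simplified prototype-based pixel loss. The function names (compose, dense_loss, instance_loss), the prototype formulation, and the temperature value are illustrative assumptions for exposition, not the authors' released implementation.

import torch
import torch.nn.functional as F

def compose(fg, bg, mask):
    """Paste foreground pixels (where mask == 1) onto a background image.
    fg, bg: (B, 3, H, W) images; mask: (B, 1, H, W) binary foreground mask."""
    return mask * fg + (1.0 - mask) * bg

def dense_loss(feat, mask, tau=0.2):
    """Simplified pixel-level objective (an assumption, not the paper's exact
    loss): pull foreground pixel embeddings toward their mean foreground
    prototype while contrasting against all pixels of the composed image.
    feat: (B, C, H, W) dense embeddings from the segmentation head."""
    f = F.normalize(feat, dim=1).flatten(2)              # (B, C, HW)
    m = mask.flatten(2)                                  # (B, 1, HW)
    # mean embedding over foreground pixels, renormalized
    proto = F.normalize((f * m).sum(-1) / m.sum(-1).clamp(min=1), dim=1)
    sim = torch.einsum('bc,bcn->bn', proto, f) / tau     # (B, HW)
    log_p = sim - sim.logsumexp(dim=1, keepdim=True)     # softmax over pixels
    labels = m.squeeze(1)                                # foreground = positive
    return -(log_p * labels).sum() / labels.sum().clamp(min=1)

def instance_loss(q, k, tau=0.2):
    """Image-level objective: the two compositions sharing a foreground are
    positives; other images in the batch act as negatives (InfoNCE).
    q, k: (B, C) pooled embeddings of the two composed views."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / tau                             # (B, B) similarity
    targets = torch.arange(q.size(0), device=q.device)   # diagonal = positives
    return F.cross_entropy(logits, targets)

A toy usage, with random tensors standing in for a real pipeline that crops a foreground from one image and pastes it onto two different backgrounds:

fg = torch.rand(4, 3, 224, 224)
bg1, bg2 = torch.rand(4, 3, 224, 224), torch.rand(4, 3, 224, 224)
mask = (torch.rand(4, 1, 224, 224) > 0.5).float()
x1, x2 = compose(fg, bg1, mask), compose(fg, bg2, mask)  # two views, same foreground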

Cite

Text

Wang et al. "CP2: Copy-Paste Contrastive Pretraining for Semantic Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2022. doi:10.1007/978-3-031-20056-4_29

Markdown

[Wang et al. "CP2: Copy-Paste Contrastive Pretraining for Semantic Segmentation." Proceedings of the European Conference on Computer Vision (ECCV), 2022.](https://mlanthology.org/eccv/2022/wang2022eccv-cp2/) doi:10.1007/978-3-031-20056-4_29

BibTeX

@inproceedings{wang2022eccv-cp2,
  title     = {{CP2: Copy-Paste Contrastive Pretraining for Semantic Segmentation}},
  author    = {Wang, Feng and Wang, Huiyu and Wei, Chen and Yuille, Alan and Shen, Wei},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2022},
  doi       = {10.1007/978-3-031-20056-4_29},
  url       = {https://mlanthology.org/eccv/2022/wang2022eccv-cp2/}
}