Self-Supervised Learning with Local Contrastive Loss for Detection and Semantic Segmentation
Abstract
We present a self-supervised learning (SSL) method suited to semi-global tasks such as object detection and semantic segmentation. We enforce local consistency between self-learned features that represent corresponding image locations across transformed versions of the same image by minimizing a pixel-level local contrastive (LC) loss during training. The LC loss can be added to existing self-supervised learning methods with minimal overhead. We evaluate our SSL approach on two downstream tasks, object detection and semantic segmentation, using the COCO, PASCAL VOC, and CityScapes datasets. Our method outperforms existing state-of-the-art SSL approaches by 1.9% on COCO object detection, 1.4% on PASCAL VOC detection, and 0.6% on CityScapes segmentation.
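Since the abstract describes the LC loss only at a high level, the sketch below illustrates one plausible pixel-level contrastive formulation: an InfoNCE-style loss between spatially aligned feature vectors of two augmented views, where the positive pair for each location is the same location in the other view. The function name `local_contrastive_loss`, the temperature value, and the assumption that the two feature maps have already been aligned (e.g., by undoing geometric augmentations) are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of a pixel-level local contrastive (InfoNCE-style) loss.
# Assumes feat_a and feat_b are feature maps of two augmented views of the same
# image, already spatially aligned so that the same (h, w) index corresponds to
# the same image location. Names and the temperature are illustrative.
import torch
import torch.nn.functional as F


def local_contrastive_loss(feat_a: torch.Tensor,
                           feat_b: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    """feat_a, feat_b: (B, C, H, W) aligned feature maps from the two views."""
    b, c, h, w = feat_a.shape
    # Flatten spatial dimensions to (B, H*W, C) and L2-normalize each pixel feature.
    za = F.normalize(feat_a.flatten(2).transpose(1, 2), dim=-1)
    zb = F.normalize(feat_b.flatten(2).transpose(1, 2), dim=-1)

    # Pairwise cosine similarities between all pixel locations of the two views.
    logits = torch.bmm(za, zb.transpose(1, 2)) / temperature  # (B, H*W, H*W)

    # Positive pairs lie on the diagonal: the same spatial location in both views.
    targets = torch.arange(h * w, device=feat_a.device).expand(b, -1)

    # Symmetric cross-entropy over both matching directions (a -> b and b -> a).
    loss_ab = F.cross_entropy(logits.reshape(b * h * w, h * w), targets.reshape(-1))
    loss_ba = F.cross_entropy(logits.transpose(1, 2).reshape(b * h * w, h * w),
                              targets.reshape(-1))
    return 0.5 * (loss_ab + loss_ba)


if __name__ == "__main__":
    # Toy usage: two slightly perturbed copies of the same feature map.
    base = torch.randn(2, 64, 7, 7)
    print(local_contrastive_loss(base, base + 0.05 * torch.randn_like(base)))
```

In an SSL pipeline, a loss like this would typically be added to the method's existing global (image-level) objective with a weighting coefficient, which matches the abstract's claim that the LC loss can be attached to existing methods with minimal overhead.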
Cite
Text
Islam et al. "Self-Supervised Learning with Local Contrastive Loss for Detection and Semantic Segmentation." Winter Conference on Applications of Computer Vision, 2023.
Markdown
[Islam et al. "Self-Supervised Learning with Local Contrastive Loss for Detection and Semantic Segmentation." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/islam2023wacv-selfsupervised/)
BibTeX
@inproceedings{islam2023wacv-selfsupervised,
title = {{Self-Supervised Learning with Local Contrastive Loss for Detection and Semantic Segmentation}},
author = {Islam, Ashraful and Lundell, Benjamin and Sawhney, Harpreet and Sinha, Sudipta N. and Morales, Peter and Radke, Richard J.},
booktitle = {Winter Conference on Applications of Computer Vision},
year = {2023},
pages = {5624-5633},
url = {https://mlanthology.org/wacv/2023/islam2023wacv-selfsupervised/}
}