Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation

Abstract

In this work we address multi-target domain adaptation (MTDA) in semantic segmentation, which consists of adapting a single model from an annotated source dataset to multiple unannotated target datasets that differ in their underlying data distributions. To address MTDA, we propose a self-training strategy that employs pseudo-labels to induce cooperation among multiple domain-specific classifiers. We employ feature stylization as an efficient way to generate image views that form an integral part of self-training. Additionally, to prevent the network from overfitting to noisy pseudo-labels, we devise a rectification strategy that leverages the predictions from different classifiers to estimate the quality of pseudo-labels. Our extensive experiments on numerous settings, based on four different semantic segmentation datasets, validate the effectiveness of the proposed self-training strategy and show that our method outperforms state-of-the-art MTDA approaches.
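To make the cooperative rectification idea concrete, below is a minimal, illustrative sketch (not the authors' implementation) of how pseudo-labels from one domain-specific classifier might be weighted by the agreement and confidence of another before being used for self-training. The tensor shapes, the confidence threshold, and the specific weighting rule are assumptions chosen for clarity.

```python
# Illustrative sketch only: pseudo-labels from classifier A are "rectified"
# by down-weighting pixels where classifier B disagrees or is uncertain.
import torch
import torch.nn.functional as F

def rectified_pseudo_label_loss(logits_a, logits_b, threshold=0.9):
    """Cross-entropy on pseudo-labels from head A, weighted by head B.

    logits_a, logits_b: (B, C, H, W) outputs of two domain-specific heads
    on the same unlabeled target image (e.g., differently stylized views).
    """
    prob_a = logits_a.softmax(dim=1)
    prob_b = logits_b.softmax(dim=1)

    # Hard pseudo-labels and confidence from head A.
    conf_a, pseudo = prob_a.max(dim=1)
    # Probability head B assigns to the same classes (a proxy for agreement).
    conf_b = prob_b.gather(1, pseudo.unsqueeze(1)).squeeze(1)

    # Rectification weight: keep a pixel only if head A is confident,
    # scaled by head B's agreement (hypothetical rule for illustration).
    weight = (conf_a > threshold).float() * conf_b

    # Train head B on head A's rectified pseudo-labels (cross-teaching).
    loss = F.cross_entropy(logits_b, pseudo, reduction="none")
    return (weight * loss).sum() / weight.sum().clamp(min=1.0)
```

In this sketch, each head learns from the other's pseudo-labels while unreliable pixels are suppressed, which mirrors the paper's stated goal of estimating pseudo-label quality from the predictions of different classifiers.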

Cite

Text

Zhang et al. "Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation." Winter Conference on Applications of Computer Vision, 2023.

Markdown

[Zhang et al. "Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation." Winter Conference on Applications of Computer Vision, 2023.](https://mlanthology.org/wacv/2023/zhang2023wacv-cooperative/)

BibTeX

@inproceedings{zhang2023wacv-cooperative,
  title     = {{Cooperative Self-Training for Multi-Target Adaptive Semantic Segmentation}},
  author    = {Zhang, Yangsong and Roy, Subhankar and Lu, Hongtao and Ricci, Elisa and Lathuilière, Stéphane},
  booktitle = {Winter Conference on Applications of Computer Vision},
  year      = {2023},
  pages     = {5604--5613},
  url       = {https://mlanthology.org/wacv/2023/zhang2023wacv-cooperative/}
}