Efficient Active Domain Adaptation for Semantic Segmentation by Selecting Information-Rich Superpixels
Abstract
Unsupervised Domain Adaptation (UDA) for semantic segmentation has been widely studied to exploit label-rich source data to assist the segmentation of unlabeled samples in the target domain. Despite these efforts, UDA performance remains far below that of fully-supervised models owing to the lack of target annotations. To this end, we propose an efficient superpixel-level active learning method for domain adaptive semantic segmentation that maximizes segmentation performance by automatically querying a small number of superpixels for labeling. To conserve annotation resources, we propose a novel low-uncertainty superpixel fusion module, which amalgamates superpixels with low-uncertainty features based on feature affinity, thereby ensuring high-quality fusion of superpixels. As for the acquisition strategy, our method takes into account two types of information-rich superpixels: large-size superpixels with substantial information content, and superpixels of the greatest value for domain adaptation learning. Further, we employ cross-domain mixing and pseudo-labeling with consistency regularization to address the domain shift and label noise problems, respectively. Extensive experiments demonstrate that our proposed superpixel-level method utilizes a limited budget more efficiently than previous pixel-level techniques and surpasses state-of-the-art methods at 40x lower cost. Our code is available at https://github.com/EdenHazardan/ADA_superpixel.
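The acquisition idea above — scoring superpixels so that large, informative regions are queried first under a fixed budget — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function name, the size-weighted uncertainty score, and the inputs (a per-pixel uncertainty map plus a superpixel label map) are all assumptions made for clarity.

```python
import numpy as np

def select_superpixels(uncertainty, sp_labels, budget):
    """Illustrative superpixel-level acquisition: score each superpixel by
    its mean pixel uncertainty weighted by (log) size, so large uncertain
    regions rank first, then query the top `budget` superpixels.
    `uncertainty` and `sp_labels` are same-shape 2D arrays."""
    scores = []
    for sp_id in np.unique(sp_labels):
        mask = sp_labels == sp_id
        size = mask.sum()
        # size-weighted uncertainty: favors large, information-rich superpixels
        score = uncertainty[mask].mean() * np.log1p(size)
        scores.append((score, int(sp_id)))
    scores.sort(reverse=True)
    return [sp_id for _, sp_id in scores[:budget]]
```

In practice the uncertainty map would come from the segmentation network's predictive entropy, and the superpixel map from an off-the-shelf algorithm such as SLIC; the paper's full criterion additionally accounts for domain-adaptation value, which this sketch omits.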
Cite
Text
Gao et al. "Efficient Active Domain Adaptation for Semantic Segmentation by Selecting Information-Rich Superpixels." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-72754-2_23
Markdown
[Gao et al. "Efficient Active Domain Adaptation for Semantic Segmentation by Selecting Information-Rich Superpixels." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/gao2024eccv-efficient/) doi:10.1007/978-3-031-72754-2_23
BibTeX
@inproceedings{gao2024eccv-efficient,
title = {{Efficient Active Domain Adaptation for Semantic Segmentation by Selecting Information-Rich Superpixels}},
author = {Gao, Yuan and Wang, Zilei and Zhang, Yixin and Tu, Bohai},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-72754-2_23},
url = {https://mlanthology.org/eccv/2024/gao2024eccv-efficient/}
}