OVANet: One-vs-All Network for Universal Domain Adaptation

Abstract

Universal Domain Adaptation (UNDA) aims to handle both domain-shift and category-shift between two datasets, where the main challenge is to transfer knowledge while rejecting "unknown" classes which are absent in the labeled source data but present in the unlabeled target data. Existing methods manually set a threshold to reject "unknown" samples based on validation or a pre-defined ratio of "unknown" samples, but this strategy is not practical. In this paper, we propose a method to learn the threshold using source samples and to adapt it to the target domain. Our idea is that a minimum inter-class distance in the source domain should be a good threshold to decide between "known" or "unknown" in the target. To learn the inter- and intra-class distance, we propose to train a one-vs-all classifier for each class using labeled source data. Then, we adapt the open-set classifier to the target domain by minimizing class entropy. The resulting framework is the simplest of all UNDA baselines and is insensitive to the value of a hyper-parameter, yet outperforms baselines by a large margin.
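
The abstract describes two training signals: a one-vs-all (open-set) loss on labeled source samples and an entropy-minimization term applied to the open-set classifier on unlabeled target samples. Below is a minimal PyTorch-style sketch of one plausible form of these two losses, assuming the open-set head outputs a (negative, positive) logit pair for each known class; the function names, tensor shapes, and the hard-negative selection detail are illustrative rather than a verbatim reproduction of the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ova_loss(open_logits, labels):
    """One-vs-all loss on labeled source samples (illustrative sketch).

    open_logits: (B, 2, K) tensor, a (negative, positive) logit pair per known class.
    labels: (B,) tensor of ground-truth source class indices.
    """
    probs = F.softmax(open_logits, dim=1)              # per-class binary probabilities
    pos = probs[:, 1, :]                                # p("x belongs to class k")
    neg = probs[:, 0, :]                                # p("x does not belong to class k")
    onehot = F.one_hot(labels, pos.size(1)).float()
    # Positive term: push the binary head of the true class toward "known".
    loss_pos = -(torch.log(pos + 1e-8) * onehot).sum(1).mean()
    # Negative term: penalize only the hardest wrong class for each sample.
    loss_neg = (-torch.log(neg + 1e-8) * (1 - onehot)).max(1)[0].mean()
    return loss_pos + loss_neg

def open_entropy(open_logits):
    """Mean entropy of the one-vs-all heads, minimized on unlabeled target samples."""
    probs = F.softmax(open_logits, dim=1)               # (B, 2, K)
    ent = -(probs * torch.log(probs + 1e-8)).sum(1)     # entropy of each binary head
    return ent.mean()

# Example usage: batch of 4 samples, 10 known classes.
src_logits = torch.randn(4, 2, 10)
src_labels = torch.randint(0, 10, (4,))
tgt_logits = torch.randn(4, 2, 10)
loss = ova_loss(src_logits, src_labels) + open_entropy(tgt_logits)
```

In this sketch, the source-side term shapes the per-class decision boundaries (and hence the implicit "known"/"unknown" threshold), while the target-side entropy term pushes the open-set classifier toward confident decisions on unlabeled target samples.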

Cite

Text

Saito and Saenko. "OVANet: One-vs-All Network for Universal Domain Adaptation." International Conference on Computer Vision, 2021. doi:10.1109/ICCV48922.2021.00887

Markdown

[Saito and Saenko. "OVANet: One-vs-All Network for Universal Domain Adaptation." International Conference on Computer Vision, 2021.](https://mlanthology.org/iccv/2021/saito2021iccv-ovanet/) doi:10.1109/ICCV48922.2021.00887

BibTeX

@inproceedings{saito2021iccv-ovanet,
  title     = {{OVANet: One-vs-All Network for Universal Domain Adaptation}},
  author    = {Saito, Kuniaki and Saenko, Kate},
  booktitle = {International Conference on Computer Vision},
  year      = {2021},
  pages     = {9000--9009},
  doi       = {10.1109/ICCV48922.2021.00887},
  url       = {https://mlanthology.org/iccv/2021/saito2021iccv-ovanet/}
}