Adversarial Alignment for Source Free Object Detection

Abstract

Source-free object detection (SFOD) aims to transfer a detector pre-trained on a label-rich source domain to an unlabeled target domain without access to the source data. While most existing SFOD methods generate pseudo labels via a source-pretrained model to guide training, these pseudo labels are usually highly noisy due to the large domain discrepancy. To obtain better pseudo supervision, we divide the target domain into source-similar and source-dissimilar parts and align them in the feature space by adversarial learning. Specifically, we design a detection variance-based criterion to divide the target domain. This criterion is motivated by the finding that larger detection variance corresponds to higher recall and greater similarity to the source domain. We then incorporate an adversarial module into a mean teacher framework to make the feature distributions of the two subsets indistinguishable. Extensive experiments on multiple cross-domain object detection datasets demonstrate that our proposed method consistently outperforms the compared SFOD methods. Our implementation is available at https://github.com/ChuQiaosong.
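The division step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes detection variance is estimated from the spread of per-box confidence scores across several stochastic forward passes of the detector (the exact variance estimator and threshold are design choices of the paper), and that images with larger variance are assigned to the source-similar subset, as the abstract's finding suggests. All function names here are hypothetical.

```python
import numpy as np

def detection_variance(score_samples):
    """Per-image detection variance (hypothetical estimator): mean variance
    of box confidence scores across K stochastic forward passes.
    score_samples: array-like of shape (K, num_boxes)."""
    scores = np.asarray(score_samples, dtype=float)
    return float(scores.var(axis=0).mean())

def split_target_domain(per_image_samples, threshold=None):
    """Split target images into source-similar (high-variance) and
    source-dissimilar (low-variance) subsets, returning index lists.
    With threshold=None, the median variance is used as the cut."""
    variances = np.array([detection_variance(s) for s in per_image_samples])
    if threshold is None:
        threshold = np.median(variances)
    source_similar = [i for i, v in enumerate(variances) if v >= threshold]
    source_dissimilar = [i for i, v in enumerate(variances) if v < threshold]
    return source_similar, source_dissimilar
```

During adaptation, the two resulting subsets would feed a domain discriminator (e.g., via a gradient reversal layer) inside the mean teacher framework, so that their features become indistinguishable.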

Cite

Text

Chu et al. "Adversarial Alignment for Source Free Object Detection." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I1.25119

Markdown

[Chu et al. "Adversarial Alignment for Source Free Object Detection." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/chu2023aaai-adversarial/) doi:10.1609/AAAI.V37I1.25119

BibTeX

@inproceedings{chu2023aaai-adversarial,
  title     = {{Adversarial Alignment for Source Free Object Detection}},
  author    = {Chu, Qiaosong and Li, Shuyan and Chen, Guangyi and Li, Kai and Li, Xiu},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {452--460},
  doi       = {10.1609/AAAI.V37I1.25119},
  url       = {https://mlanthology.org/aaai/2023/chu2023aaai-adversarial/}
}