Multi-View Domain Adaptive Object Detection on Camera Networks

Abstract

In this paper, we study a new domain adaptation setting on camera networks, namely Multi-View Domain Adaptive Object Detection (MVDA-OD), in which labeled source data is unavailable during target adaptation and target data is captured from multiple overlapping cameras. In such a challenging context, existing methods, including adversarial training and self-training, fall short due to multi-domain data shift and the lack of source data. To tackle this problem, we propose a novel two-stage training framework. First, we pre-train the backbone using self-supervised learning, in which a multi-view association is developed to construct an effective pretext task. Second, we fine-tune the detection head using robust self-training, where a tracking-based single-view augmentation is introduced to achieve weak-hard consistency learning. By doing so, an object detection model can take advantage of informative samples generated by multi-view association and single-view augmentation to learn a discriminative backbone as well as a robust detection classifier. Experiments on two real-world multi-camera datasets demonstrate significant advantages of our approach over state-of-the-art domain adaptive object detection methods.
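The weak-hard consistency learning mentioned in the second stage can be sketched in a minimal toy form: predictions on weakly augmented views yield pseudo-labels that supervise predictions on hard (strongly augmented) views, with low-confidence pseudo-labels filtered out. This is an illustrative NumPy sketch, not the paper's implementation; the function name, the confidence threshold, and the plain cross-entropy loss are assumptions for exposition.

```python
import numpy as np

def weak_hard_consistency_loss(weak_probs, hard_probs, threshold=0.9):
    """Toy weak-hard consistency objective (illustrative, not the paper's code).

    weak_probs: (N, C) class probabilities predicted on weakly augmented views.
    hard_probs: (N, C) class probabilities predicted on hard augmented views.
    Pseudo-labels come from the weak view; only predictions whose confidence
    exceeds `threshold` contribute to the loss.
    """
    pseudo_labels = np.argmax(weak_probs, axis=1)   # hard pseudo-labels
    confidence = np.max(weak_probs, axis=1)
    keep = confidence >= threshold                  # confidence filtering
    if not keep.any():
        return 0.0                                  # no confident pseudo-labels
    # Cross-entropy of hard-view predictions against weak-view pseudo-labels.
    picked = hard_probs[keep, pseudo_labels[keep]]
    return float(-np.mean(np.log(picked + 1e-12)))
```

In this toy setup the loss pushes the hard-view classifier to agree with confident weak-view predictions, which is the general intuition behind consistency-based self-training.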

Cite

Text

Lu et al. "Multi-View Domain Adaptive Object Detection on Camera Networks." AAAI Conference on Artificial Intelligence, 2023. doi:10.1609/AAAI.V37I7.26077

Markdown

[Lu et al. "Multi-View Domain Adaptive Object Detection on Camera Networks." AAAI Conference on Artificial Intelligence, 2023.](https://mlanthology.org/aaai/2023/lu2023aaai-multi/) doi:10.1609/AAAI.V37I7.26077

BibTeX

@inproceedings{lu2023aaai-multi,
  title     = {{Multi-View Domain Adaptive Object Detection on Camera Networks}},
  author    = {Lu, Yan and Zhong, Zhun and Shu, Yuanchao},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2023},
  pages     = {8966--8974},
  doi       = {10.1609/AAAI.V37I7.26077},
  url       = {https://mlanthology.org/aaai/2023/lu2023aaai-multi/}
}