Learning Unforgotten Domain-Invariant Representations for Online Unsupervised Domain Adaptation
Abstract
Existing unsupervised domain adaptation (UDA) studies focus on transferring knowledge in an offline manner. However, many tasks involve online requirements, especially in real-time systems. In this paper, we discuss Online UDA (OUDA), which assumes that target samples arrive sequentially in small batches. OUDA tasks are challenging for prior UDA methods since online training suffers from catastrophic forgetting, which leads to poor generalization. Intuitively, a good memory is a crucial factor in the success of OUDA. We formalize this intuition theoretically with a generalization bound in which the OUDA target error is bounded by the source error, the domain discrepancy distance, and a novel metric of forgetting in continuous online learning. Our theory illustrates the tradeoffs inherent in learning and remembering representations for OUDA. To minimize the proposed forgetting metric, we propose a novel source feature distillation (SFD) method that uses the source-only model as a teacher to guide online training. In our experiments, we adapt three UDA algorithms, i.e., DANN, CDAN, and MCC, and evaluate their performance on OUDA tasks with real-world datasets. By applying SFD, the performance of all baselines is significantly improved.
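The core idea of SFD, as the abstract describes it, is to keep a frozen source-only model as a teacher and penalize the online student when its features drift away from the teacher's on incoming target batches. A minimal sketch of such a distillation penalty is shown below; all names (`sfd_loss`, the feature shapes, the weighting coefficient) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of a source-feature-distillation penalty, assuming
# the teacher is the frozen source-only feature extractor and the student
# is the model being trained online. Not the paper's exact formulation.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def sfd_loss(student_feats, teacher_feats):
    """Average feature-matching penalty over a small online batch.

    Each element is one sample's feature vector; the teacher's features
    act as an anchor that discourages catastrophic forgetting.
    """
    pairs = list(zip(student_feats, teacher_feats))
    return sum(mse(s, t) for s, t in pairs) / len(pairs)

# Toy batch of two 3-dimensional feature vectors.
teacher = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
student = [[0.9, 0.1, 0.0], [0.0, 1.0, 0.2]]
loss = sfd_loss(student, teacher)  # -> 0.01

# In training, this term would typically be added to the adaptation
# objective with a hypothetical weight, e.g.:
#   total = task_loss + lambda_sfd * sfd_loss(student_feats, teacher_feats)
```

In this sketch the teacher's parameters are never updated, so the penalty is minimized only by keeping the student's representation close to the source-trained one, which is one plausible way to control the forgetting term in the paper's bound.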
Cite
Text
Feng et al. "Learning Unforgotten Domain-Invariant Representations for Online Unsupervised Domain Adaptation." International Joint Conference on Artificial Intelligence, 2022. doi:10.24963/IJCAI.2022/410
Markdown
[Feng et al. "Learning Unforgotten Domain-Invariant Representations for Online Unsupervised Domain Adaptation." International Joint Conference on Artificial Intelligence, 2022.](https://mlanthology.org/ijcai/2022/feng2022ijcai-learning/) doi:10.24963/IJCAI.2022/410
BibTeX
@inproceedings{feng2022ijcai-learning,
title = {{Learning Unforgotten Domain-Invariant Representations for Online Unsupervised Domain Adaptation}},
author = {Feng, Cheng and Zhong, Chaoliang and Wang, Jie and Zhang, Ying and Sun, Jun and Yokota, Yasuto},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2022},
pages = {2958--2965},
doi = {10.24963/IJCAI.2022/410},
url = {https://mlanthology.org/ijcai/2022/feng2022ijcai-learning/}
}