f-Domain Adversarial Learning: Theory and Algorithms
Abstract
Unsupervised domain adaptation is used in many machine learning applications where, during training, a model has access to unlabeled data from the target domain along with a related labeled dataset from a source domain. In this paper, we introduce a novel and general domain-adversarial framework. Specifically, we derive a novel generalization bound for domain adaptation that exploits a new measure of discrepancy between distributions based on a variational characterization of f-divergences. This bound recovers the theoretical results of Ben-David et al. (2010a) as a special case and supports divergences used in practice. Building on it, we derive a new algorithmic framework that introduces a key correction to the original adversarial training method of Ganin et al. (2016). We show that many of the regularizers and ad-hoc objectives introduced in this setting in recent years are then not required to achieve performance comparable to, if not better than, state-of-the-art domain-adversarial methods. Experimental analysis conducted on real-world natural language and computer vision datasets shows that our framework outperforms existing baselines and obtains the best results for f-divergences that had not previously been considered in domain-adversarial learning.
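As background, the variational characterization referenced above is presumably the standard Fenchel-dual lower bound on f-divergences (Nguyen et al., 2010), also used in f-GAN training. A sketch in LaTeX, where f is convex with f(1) = 0, f^* is its convex conjugate, and T ranges over an assumed critic class \mathcal{T}:

D_f(P \,\|\, Q) \;\geq\; \sup_{T \in \mathcal{T}} \; \mathbb{E}_{x \sim P}\big[T(x)\big] \;-\; \mathbb{E}_{x \sim Q}\big[f^*(T(x))\big]

Equality holds when \mathcal{T} contains all measurable functions into \operatorname{dom} f^*; restricting T to a hypothesis class such as a neural network makes the bound estimable from samples, which is what adversarial training exploits. Different choices of f recover familiar divergences, e.g., f(t) = t\log t gives the KL divergence.

A minimal, self-contained Python sketch of the resulting Monte Carlo estimator for the KL case (the critic T and the toy Gaussian setup are illustrative assumptions, not the paper's implementation):

import numpy as np

# Fenchel-dual lower bound for f(t) = t*log(t) (KL divergence):
# D_KL(P || Q) >= E_P[T(x)] - E_Q[f*(T(x))], with f*(u) = exp(u - 1).
def kl_lower_bound(T, x_p, x_q):
    return np.mean(T(x_p)) - np.mean(np.exp(T(x_q) - 1.0))

# Toy check with P = N(0, 1) and Q = N(1, 1), whose true KL is 0.5.
# The optimal critic is T*(x) = 1 + log(dP/dQ)(x) = 1.5 - x, at which
# the bound is tight; in adversarial training, T would instead be a
# neural network trained to maximize the bound.
rng = np.random.default_rng(0)
x_p = rng.normal(0.0, 1.0, 100_000)
x_q = rng.normal(1.0, 1.0, 100_000)
print(kl_lower_bound(lambda x: 1.5 - x, x_p, x_q))  # approx. 0.5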
Cite

Text

Acuna et al. "f-Domain Adversarial Learning: Theory and Algorithms." International Conference on Machine Learning, 2021.

Markdown

[Acuna et al. "f-Domain Adversarial Learning: Theory and Algorithms." International Conference on Machine Learning, 2021.](https://mlanthology.org/icml/2021/acuna2021icml-fdomain/)

BibTeX
@inproceedings{acuna2021icml-fdomain,
title = {{f-Domain Adversarial Learning: Theory and Algorithms}},
author = {Acuna, David and Zhang, Guojun and Law, Marc T. and Fidler, Sanja},
booktitle = {International Conference on Machine Learning},
year = {2021},
pages = {66-75},
volume = {139},
url = {https://mlanthology.org/icml/2021/acuna2021icml-fdomain/}
}