On $f$-Divergence Principled Domain Adaptation: An Improved Framework

Abstract

Unsupervised domain adaptation (UDA) plays a crucial role in addressing distribution shifts in machine learning. In this work, we improve the theoretical foundations of UDA proposed in Acuna et al. (2021) by refining their $f$-divergence-based discrepancy and introducing a new measure, $f$-domain discrepancy ($f$-DD). By removing the absolute value function and incorporating a scaling parameter, $f$-DD yields novel target error and sample complexity bounds, allowing us to recover previous KL-based results and to bridge the gap between the algorithms and the theory presented in Acuna et al. (2021). Using a localization technique, we also develop a fast-rate generalization bound. Empirical results demonstrate the superior performance of $f$-DD-based learning algorithms over previous works on popular UDA benchmarks.
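As a rough, informal sketch (the precise $f$-DD definition and the hypothesis classes involved are given in the paper), the construction builds on the standard variational representation of an $f$-divergence, where $f^*$ denotes the convex conjugate of $f$:

$$ D_f(\mu \,\|\, \nu) \;=\; \sup_{g}\; \mathbb{E}_{x \sim \mu}\big[g(x)\big] \;-\; \mathbb{E}_{x \sim \nu}\big[f^*(g(x))\big], \qquad f^*(y) \;=\; \sup_{x}\,\{\,xy - f(x)\,\}. $$

Restricting $g$ to loss functions induced by a hypothesis class turns this representation into a domain discrepancy; the scaling parameter mentioned in the abstract can be read as additionally optimizing over rescaled functions $t\,g$ with $t > 0$, and, unlike the discrepancy of Acuna et al. (2021), no absolute value is taken.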

Cite

Text

Wang and Mao. "On $f$-Divergence Principled Domain Adaptation: An Improved Framework." Neural Information Processing Systems, 2024. doi:10.52202/079017-0215

Markdown

[Wang and Mao. "On $f$-Divergence Principled Domain Adaptation: An Improved Framework." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/wang2024neurips-fdivergence/) doi:10.52202/079017-0215

BibTeX

@inproceedings{wang2024neurips-fdivergence,
  title     = {{On $f$-Divergence Principled Domain Adaptation: An Improved Framework}},
  author    = {Wang, Ziqiao and Mao, Yongyi},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-0215},
  url       = {https://mlanthology.org/neurips/2024/wang2024neurips-fdivergence/}
}