The Balancing Principle for Parameter Choice in Distance-Regularized Domain Adaptation
Abstract
We address the unsolved algorithm design problem of choosing a justified regularization parameter in unsupervised domain adaptation. This problem is intriguing as no labels are available in the target domain. Our approach starts with the observation that the widely used method of minimizing the source error, penalized by a distance measure between source and target feature representations, shares characteristics with regularized ill-posed inverse problems. Regularization parameters in inverse problems are optimally chosen by the fundamental principle of balancing approximation and sampling errors. We use this principle to balance learning errors and domain distance in a target error bound. As a result, we obtain a theoretically justified rule for the choice of the regularization parameter. In contrast to the state of the art, our approach allows source and target distributions with disjoint supports. An empirical comparative study on benchmark datasets underpins the performance of our approach.
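To give a flavor of the balancing idea described above, the toy sketch below selects a regularization parameter from a grid by balancing two competing error terms: a fit (source-error) term that grows with stronger regularization and a domain-distance penalty that shrinks with it. The function name, the input curves, and the simple min-max selection rule are illustrative assumptions for this sketch, not the paper's exact bound or rule.

```python
import numpy as np

def balance_lambda(lambdas, fit_errors, penalty_terms):
    """Pick the lambda where the two error terms are best balanced.

    Illustrative rule: minimize the pointwise maximum of the two terms,
    which is smallest near the crossing point of the two curves.
    """
    worst = np.maximum(fit_errors, penalty_terms)  # larger of the two terms at each lambda
    return lambdas[int(np.argmin(worst))]          # lambda with the smallest worst-case term

# Hypothetical curves over a lambda grid: fit error rises with regularization
# strength, while the domain-distance penalty falls.
lambdas = [0.01, 0.1, 1.0, 10.0]
fit_errors = [0.1, 0.2, 0.4, 0.8]
penalty_terms = [0.9, 0.5, 0.3, 0.1]
print(balance_lambda(lambdas, fit_errors, penalty_terms))  # selects lambda = 1.0
```

In practice the two terms would come from a target-error bound evaluated on held-out data; the point of the sketch is only that balancing replaces label-based validation, which is unavailable in the unsupervised target domain.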
Cite
Text
Zellinger et al. "The Balancing Principle for Parameter Choice in Distance-Regularized Domain Adaptation." Neural Information Processing Systems, 2021.
Markdown
[Zellinger et al. "The Balancing Principle for Parameter Choice in Distance-Regularized Domain Adaptation." Neural Information Processing Systems, 2021.](https://mlanthology.org/neurips/2021/zellinger2021neurips-balancing/)
BibTeX
@inproceedings{zellinger2021neurips-balancing,
title = {{The Balancing Principle for Parameter Choice in Distance-Regularized Domain Adaptation}},
author = {Zellinger, Werner and Shepeleva, Natalia and Dinu, Marius-Constantin and Eghbal-zadeh, Hamid and Nguyen, Hoan Duc and Nessler, Bernhard and Pereverzyev, Sergei and Moser, Bernhard A.},
booktitle = {Neural Information Processing Systems},
year = {2021},
url = {https://mlanthology.org/neurips/2021/zellinger2021neurips-balancing/}
}