Adaptation Based on Generalized Discrepancy

Abstract

We present a new algorithm for domain adaptation improving upon the discrepancy minimization (DM) algorithm, previously shown to outperform a number of algorithms for this problem. Unlike many previously proposed solutions for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. Instead, the reweighting depends on the hypothesis sought. The algorithm is derived from a less conservative notion of discrepancy than that used by the DM algorithm, called generalized discrepancy. We present a detailed description of our algorithm and show that it can be formulated as a convex optimization problem. We also give a detailed theoretical analysis of its learning guarantees, which helps us select its parameters. Finally, we report the results of experiments demonstrating that it improves upon discrepancy minimization.

Cite

Text

Cortes et al. "Adaptation Based on Generalized Discrepancy." Journal of Machine Learning Research, 2019.

Markdown

[Cortes et al. "Adaptation Based on Generalized Discrepancy." Journal of Machine Learning Research, 2019.](https://mlanthology.org/jmlr/2019/cortes2019jmlr-adaptation/)

BibTeX

@article{cortes2019jmlr-adaptation,
  title     = {{Adaptation Based on Generalized Discrepancy}},
  author    = {Cortes, Corinna and Mohri, Mehryar and Mu{\~n}oz Medina, Andr{\'e}s},
  journal   = {Journal of Machine Learning Research},
  year      = {2019},
  pages     = {1--30},
  volume    = {20},
  url       = {https://mlanthology.org/jmlr/2019/cortes2019jmlr-adaptation/}
}