Why Be Adversarial? Let's Cooperate!: Cooperative Dataset Alignment via JSD Upper Bound

Abstract

Unsupervised dataset alignment estimates a transformation that maps two or more source domains to a shared aligned domain, given only the domain datasets. This task has many applications, including generative modeling, unsupervised domain adaptation, and socially aware learning. Most prior works use adversarial learning (i.e., min-max optimization), which can be challenging to optimize and evaluate. A few recent works explore non-adversarial flow-based (i.e., invertible) approaches, but they lack a unified perspective. Therefore, we propose to unify and generalize previous flow-based approaches under a single non-adversarial framework, which we prove is equivalent to minimizing an upper bound on the Jensen-Shannon Divergence (JSD). Importantly, our problem reduces to a min-min, i.e., cooperative, problem and can provide a natural evaluation metric for unsupervised dataset alignment.
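For background, the Jensen-Shannon Divergence that the paper's objective upper-bounds is the standard symmetrized divergence between two distributions $p$ and $q$ (this is the textbook definition, not a formula from the paper itself):

```latex
% Jensen-Shannon Divergence between distributions p and q,
% defined via KL divergences to the mixture m = (p + q)/2.
\mathrm{JSD}(p \,\|\, q)
  = \tfrac{1}{2}\,\mathrm{KL}\!\left(p \,\Big\|\, \tfrac{p+q}{2}\right)
  + \tfrac{1}{2}\,\mathrm{KL}\!\left(q \,\Big\|\, \tfrac{p+q}{2}\right)
```

JSD is symmetric, bounded (by $\log 2$ in nats), and zero exactly when $p = q$, which is why minimizing it, or an upper bound on it, drives the aligned domains toward a shared distribution.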

Cite

Text

Cho et al. "Why Be Adversarial? Let's Cooperate!: Cooperative Dataset Alignment via JSD Upper Bound." ICML 2021 Workshops: INNF, 2021.

Markdown

[Cho et al. "Why Be Adversarial? Let's Cooperate!: Cooperative Dataset Alignment via JSD Upper Bound." ICML 2021 Workshops: INNF, 2021.](https://mlanthology.org/icmlw/2021/cho2021icmlw-adversarial/)

BibTeX

@inproceedings{cho2021icmlw-adversarial,
  title     = {{Why Be Adversarial? Let's Cooperate!: Cooperative Dataset Alignment via JSD Upper Bound}},
  author    = {Cho, Wonwoong and Gong, Ziyu and Inouye, David I.},
  booktitle = {ICML 2021 Workshops: INNF},
  year      = {2021},
  url       = {https://mlanthology.org/icmlw/2021/cho2021icmlw-adversarial/}
}