On Adversarial Mixup Resynthesis

Abstract

In this paper, we explore new approaches to combining information encoded within the learned representations of auto-encoders. We explore models that are capable of combining the attributes of multiple inputs such that a resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data. Furthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations that are consistent with a conditioned class label. We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.
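The abstract describes the core mechanism: latent codes produced by an autoencoder are combined, either by mixup-style interpolation or by a binary mask, and the decoded combinations are trained to fool a discriminator that separates real from resynthesised data. Below is a minimal, illustrative PyTorch sketch of that idea; the module definitions, loss weights, and training step are assumptions made for exposition and are not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy autoencoder and discriminator over flat 784-dim inputs (e.g. MNIST).
# Architectures and hyperparameters are placeholders, not from the paper.
class Encoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                                 nn.Linear(256, latent_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 784), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

def mix_latents(z1, z2, mode="mixup"):
    """Combine two batches of latent codes."""
    if mode == "mixup":
        # Convex interpolation with a per-example coefficient.
        alpha = torch.rand(z1.size(0), 1, device=z1.device)
        return alpha * z1 + (1 - alpha) * z2
    # Bernoulli mask: each latent unit is taken from either z1 or z2.
    mask = torch.bernoulli(torch.full_like(z1, 0.5))
    return mask * z1 + (1 - mask) * z2

def training_step(x, enc, dec, disc, opt_ae, opt_d, lam=1.0):
    # Encode the batch and a shuffled copy, then decode both the original
    # code (for reconstruction) and the mixed code (for the adversarial game).
    z = enc(x)
    z_perm = z[torch.randperm(x.size(0))]
    x_rec = dec(z)
    x_mix = dec(mix_latents(z, z_perm))

    # Discriminator update: real inputs vs. decoded mixes.
    d_real, d_fake = disc(x), disc(x_mix.detach())
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Autoencoder update: reconstruct the input and make the decoded mixes
    # look real to the discriminator.
    d_mix = disc(x_mix)
    loss_ae = F.mse_loss(x_rec, x) + \
              lam * F.binary_cross_entropy_with_logits(d_mix, torch.ones_like(d_mix))
    opt_ae.zero_grad(); loss_ae.backward(); opt_ae.step()
    return loss_d.item(), loss_ae.item()
```

In this sketch the mixing function is a fixed rule; in the paper's semi-supervised setting, the mixing can instead be learned so that the decoded combination is consistent with a conditioned class label.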

Cite

Text

Beckham et al. "On Adversarial Mixup Resynthesis." Neural Information Processing Systems, 2019.

Markdown

[Beckham et al. "On Adversarial Mixup Resynthesis." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/beckham2019neurips-adversarial/)

BibTeX

@inproceedings{beckham2019neurips-adversarial,
  title     = {{On Adversarial Mixup Resynthesis}},
  author    = {Beckham, Christopher and Honari, Sina and Verma, Vikas and Lamb, Alex M and Ghadiri, Farnoosh and Hjelm, R Devon and Bengio, Yoshua and Pal, Chris},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {4346--4357},
  url       = {https://mlanthology.org/neurips/2019/beckham2019neurips-adversarial/}
}