Adversarial Mixup Resynthesizers

Abstract

In this paper, we explore new approaches to combining information encoded within the learned representations of autoencoders. We study models that combine the attributes of multiple inputs such that the resynthesised output is trained to fool an adversarial discriminator distinguishing real from synthesised data. Furthermore, we examine this architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations, that are consistent with a conditioned class label. We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.
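The abstract mentions two kinds of mixing in latent space: convex interpolation of hidden states and binary-masked combinations of latent codes. The sketch below illustrates these two mixing functions on flat latent vectors; the function names, vector size, and the Beta/Bernoulli sampling choices are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def mixup_latent(h1, h2, alpha):
    """Convex combination of two latent codes (interpolation-style mix).

    `alpha` in [0, 1]; alpha=1 recovers h1, alpha=0 recovers h2.
    """
    return alpha * h1 + (1.0 - alpha) * h2

def masked_mix(h1, h2, mask):
    """Binary-mask combination: each latent unit is copied from one input."""
    return mask * h1 + (1.0 - mask) * h2

# Hypothetical latent codes standing in for encoder outputs.
rng = np.random.default_rng(0)
h1 = rng.normal(size=8)
h2 = rng.normal(size=8)

alpha = rng.beta(1.0, 1.0)                        # mixing coefficient
mask = rng.integers(0, 2, size=8).astype(float)   # Bernoulli mask

h_interp = mixup_latent(h1, h2, alpha)
h_masked = masked_mix(h1, h2, mask)
```

In the paper's setup, a mixed code like `h_interp` or `h_masked` would be decoded back to data space and scored by an adversarial discriminator for real versus synthesised samples; here only the mixing step is sketched.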

Cite

Text

Beckham et al. "Adversarial Mixup Resynthesizers." ICLR 2019 Workshops: DeepGenStruct, 2019.

Markdown

[Beckham et al. "Adversarial Mixup Resynthesizers." ICLR 2019 Workshops: DeepGenStruct, 2019.](https://mlanthology.org/iclrw/2019/beckham2019iclrw-adversarial/)

BibTeX

@inproceedings{beckham2019iclrw-adversarial,
  title     = {{Adversarial Mixup Resynthesizers}},
  author    = {Beckham, Christopher and Honari, Sina and Lamb, Alex and Verma, Vikas and Ghadiri, Farnoosh and Hjelm, R Devon and Pal, Christopher},
  booktitle = {ICLR 2019 Workshops: DeepGenStruct},
  year      = {2019},
  url       = {https://mlanthology.org/iclrw/2019/beckham2019iclrw-adversarial/}
}