Dual Contradistinctive Generative Autoencoder

Abstract

We present a new generative autoencoder model with dual contradistinctive losses, designed to improve generative autoencoders that perform simultaneous inference (reconstruction) and synthesis (sampling). Our model, named dual contradistinctive generative autoencoder (DC-VAE), integrates an instance-level discriminative loss (maintaining the instance-level fidelity of the reconstruction/synthesis) with a set-level adversarial loss (encouraging the set-level fidelity of the reconstruction/synthesis), both being contradistinctive. Extensive experimental results by DC-VAE across different resolutions, including 32x32, 64x64, 128x128, and 512x512, are reported. The two contradistinctive losses work harmoniously in DC-VAE, leading to a significant qualitative and quantitative performance enhancement over the baseline VAEs without architectural changes. State-of-the-art or competitive results among generative autoencoders are observed for image reconstruction, image synthesis, image interpolation, and representation learning. DC-VAE is a general-purpose VAE model, applicable to a wide variety of downstream tasks in computer vision and machine learning.
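To make the "instance-level discriminative loss" concrete, below is a minimal NumPy sketch of an InfoNCE-style contrastive term: each reconstruction's features are scored against the features of every input in the batch, and the loss rewards matching the correct instance (the diagonal). This is an illustrative assumption, not the paper's exact formulation; the feature extractor, batch layout, and `temperature` value are all hypothetical.

```python
import numpy as np

def info_nce(feat_real, feat_recon, temperature=0.1):
    """Instance-level contrastive loss between features of real inputs and
    their reconstructions (sketch; the real model would use learned features)."""
    # L2-normalize feature rows so similarities are cosine similarities.
    a = feat_real / np.linalg.norm(feat_real, axis=1, keepdims=True)
    b = feat_recon / np.linalg.norm(feat_recon, axis=1, keepdims=True)
    # (B, B) similarity matrix; entry (i, j) compares input i with recon j.
    logits = (a @ b.T) / temperature
    # Numerically stable log-softmax over each row.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal: recon i should match input i.
    return -np.mean(np.diag(log_probs))
```

In DC-VAE this instance-level term would be combined with the usual VAE evidence lower bound and a set-level adversarial (GAN-style) loss on real versus generated batches; the sketch above only illustrates the contrastive component.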

Cite

Text

Parmar et al. "Dual Contradistinctive Generative Autoencoder." Conference on Computer Vision and Pattern Recognition, 2021. doi:10.1109/CVPR46437.2021.00088

Markdown

[Parmar et al. "Dual Contradistinctive Generative Autoencoder." Conference on Computer Vision and Pattern Recognition, 2021.](https://mlanthology.org/cvpr/2021/parmar2021cvpr-dual/) doi:10.1109/CVPR46437.2021.00088

BibTeX

@inproceedings{parmar2021cvpr-dual,
  title     = {{Dual Contradistinctive Generative Autoencoder}},
  author    = {Parmar, Gaurav and Li, Dacheng and Lee, Kwonjoon and Tu, Zhuowen},
  booktitle = {Conference on Computer Vision and Pattern Recognition},
  year      = {2021},
  pages     = {823--832},
  doi       = {10.1109/CVPR46437.2021.00088},
  url       = {https://mlanthology.org/cvpr/2021/parmar2021cvpr-dual/}
}