Towards Consistent Variational Auto-Encoding (Student Abstract)

Abstract

Variational autoencoders (VAEs) have been a successful approach to learning meaningful representations of data in an unsupervised manner. However, suboptimal representations are often learned because the approximate inference model fails to match the true posterior of the generative model, i.e., an inconsistency exists between the learned inference and generative models. In this paper, we introduce a novel consistency loss that directly requires the encoding of the reconstructed data point to match the encoding of the original data, leading to better representations. Through experiments on MNIST and Fashion-MNIST, we demonstrate the existence of this inconsistency in VAE learning and show that our method can effectively reduce it.
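
As a rough illustration of the idea described in the abstract, the sketch below augments a standard VAE objective with a consistency penalty that re-encodes the reconstruction and matches it against the original encoding. The encoder/decoder interfaces, the weight lambda_c, and the choice to match the posterior means with a squared error are assumptions made for this sketch; the paper's exact formulation may differ.

import torch
import torch.nn.functional as F

def consistent_vae_loss(encoder, decoder, x, lambda_c=1.0):
    # Assumed interfaces: encoder(x) -> (mu, logvar) of the approximate
    # posterior q(z|x); decoder(z) -> reconstruction probabilities in [0, 1].
    mu, logvar = encoder(x)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)          # reparameterisation trick
    x_hat = decoder(z)

    # Negative ELBO: reconstruction term + KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    # Consistency term (hypothetical form): encode the reconstructed
    # data point and penalise its distance to the original encoding.
    mu_hat, _ = encoder(x_hat)
    consistency = F.mse_loss(mu_hat, mu, reduction="sum")

    return recon + kl + lambda_c * consistency

Matching in latent space, rather than adding another pixel-space term, is what ties the inference and generative models together: if the decoder's output re-encodes to the same posterior, the two models are consistent in the sense the abstract describes.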

Cite

Text

Liu et al. "Towards Consistent Variational Auto-Encoding (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020. doi:10.1609/AAAI.V34I10.7207

Markdown

[Liu et al. "Towards Consistent Variational Auto-Encoding (Student Abstract)." AAAI Conference on Artificial Intelligence, 2020.](https://mlanthology.org/aaai/2020/liu2020aaai-consistent/) doi:10.1609/AAAI.V34I10.7207

BibTeX

@inproceedings{liu2020aaai-consistent,
  title     = {{Towards Consistent Variational Auto-Encoding (Student Abstract)}},
  author    = {Liu, Yijing and Lin, Shuyu and Clark, Ronald},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {13869--13870},
  doi       = {10.1609/AAAI.V34I10.7207},
  url       = {https://mlanthology.org/aaai/2020/liu2020aaai-consistent/}
}