Hybrid Mutual Information Lower-Bound Estimators for Representation Learning

Abstract

Self-supervised representation learning methods based on the principle of maximizing mutual information have been successful at unsupervised learning of visual representations. These approaches are low-variance mutual information lower-bound estimators, yet their lack of distributional assumptions prevents them from learning certain important information such as texture. Estimators based on distributional assumptions, such as autoencoders, bypass this issue but tend to perform worse on downstream classification. To this end, we consider a hybrid approach that incorporates both the distribution-free contrastive lower bound and the distribution-based autoencoder lower bound. We show that, with a single set of representations, the hybrid approach achieves good performance on multiple downstream tasks such as classification, reconstruction, and generation.
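
The hybrid objective described above can be read as a weighted combination of a contrastive (InfoNCE-style) lower bound and an autoencoder reconstruction lower bound, both computed from a single shared encoder. The following is a minimal sketch of such a combined loss, not the paper's exact formulation: the encoder/decoder modules, the InfoNCE temperature, and the weighting coefficient lam are illustrative assumptions.

# Sketch of a hybrid MI lower-bound objective (assumed formulation):
# an InfoNCE-style contrastive term plus an autoencoder reconstruction term,
# both computed from the same encoder. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def hybrid_loss(encoder, decoder, view1, view2, lam=1.0, temperature=0.1):
    z1 = encoder(view1)   # representations of two augmented views of the same images
    z2 = encoder(view2)

    # Contrastive (distribution-free) lower bound: InfoNCE over the batch.
    z1n = F.normalize(z1, dim=-1)
    z2n = F.normalize(z2, dim=-1)
    logits = z1n @ z2n.t() / temperature                 # (batch, batch) similarities
    labels = torch.arange(logits.size(0), device=logits.device)
    contrastive = F.cross_entropy(logits, labels)

    # Autoencoder (distribution-based) lower bound: with a Gaussian decoder,
    # the reconstruction term reduces to a mean-squared error.
    recon = F.mse_loss(decoder(z1), view1)

    # Minimizing the weighted sum maximizes both mutual information lower bounds.
    return contrastive + lam * recon

Because both terms share one encoder, the same representations can serve discriminative tasks (via the contrastive term) and reconstruction or generation (via the decoder term).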

Cite

Text

Sinha et al. "Hybrid Mutual Information Lower-Bound Estimators for Representation Learning." ICLR 2021 Workshops: Neural_Compression, 2021.

Markdown

[Sinha et al. "Hybrid Mutual Information Lower-Bound Estimators for Representation Learning." ICLR 2021 Workshops: Neural_Compression, 2021.](https://mlanthology.org/iclrw/2021/sinha2021iclrw-hybrid/)

BibTeX

@inproceedings{sinha2021iclrw-hybrid,
  title     = {{Hybrid Mutual Information Lower-Bound Estimators for Representation Learning}},
  author    = {Sinha, Abhishek and Song, Jiaming and Ermon, Stefano},
  booktitle = {ICLR 2021 Workshops: Neural_Compression},
  year      = {2021},
  url       = {https://mlanthology.org/iclrw/2021/sinha2021iclrw-hybrid/}
}