Disentangling Factors of Variations Using Few Labels

Abstract

Learning disentangled representations is considered a promising research direction in representation learning. Recently, Locatello et al. (2018) demonstrated that the unsupervised learning of disentangled representations is theoretically impossible without inductive biases and that state-of-the-art methods, which are often unsupervised, require access to annotated examples to select good model runs. Yet, if we assume access to labels for model selection, it is not clear why we should not use them directly for training. In this paper, we first show that model selection using few labels is feasible. Then, as a proof of concept, we consider a simple semi-supervised method that directly uses the labels for training. We train more than 7000 models and empirically validate that collecting a handful of potentially noisy labels is sufficient to learn disentangled representations.
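
The abstract only hints at the form of the semi-supervised objective. As a rough, hypothetical sketch (not the paper's exact formulation), the snippet below combines a β-VAE-style unsupervised loss with a supervised regression term that ties a few latent dimensions to the annotated factors on the small labeled subset. The function name, the Bernoulli (binary cross-entropy) reconstruction term, and the weights `beta` and `gamma` are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(recon_logits, x, mu, logvar, z, labels=None, beta=4.0, gamma=1.0):
    """Illustrative semi-supervised disentanglement objective (a sketch, not the paper's method):
    a beta-VAE style unsupervised term plus, on the few labeled examples, a regression term
    aligning the first k latent dimensions with the k annotated factors of variation."""
    batch = x.size(0)
    # Reconstruction term, assuming a Bernoulli decoder that outputs logits.
    recon_loss = F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum") / batch
    # KL divergence between the Gaussian approximate posterior and a standard normal prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / batch
    loss = recon_loss + beta * kl
    if labels is not None:
        # Supervised term on the handful of labeled examples.
        k = labels.size(1)
        loss = loss + gamma * F.mse_loss(z[:, :k], labels)
    return loss

# Example with random data: a batch of 8 flattened 64x64 images and 2 labeled factors.
x = torch.rand(8, 64 * 64)
recon_logits = torch.randn(8, 64 * 64)
mu, logvar = torch.randn(8, 10), torch.randn(8, 10)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
labels = torch.rand(8, 2)
print(semi_supervised_loss(recon_logits, x, mu, logvar, z, labels))
```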

Cite

Text

Locatello et al. "Disentangling Factors of Variations Using Few Labels." ICLR 2019 Workshops: LLD, 2019.

Markdown

[Locatello et al. "Disentangling Factors of Variations Using Few Labels." ICLR 2019 Workshops: LLD, 2019.](https://mlanthology.org/iclrw/2019/locatello2019iclrw-disentangling/)

BibTeX

@inproceedings{locatello2019iclrw-disentangling,
  title     = {{Disentangling Factors of Variations Using Few Labels}},
  author    = {Locatello, Francesco and Tschannen, Michael and Bauer, Stefan and Rätsch, Gunnar and Schölkopf, Bernhard and Bachem, Olivier},
  booktitle = {ICLR 2019 Workshops: LLD},
  year      = {2019},
  url       = {https://mlanthology.org/iclrw/2019/locatello2019iclrw-disentangling/}
}