Learning Sparse Latent Representations with the Deep Copula Information Bottleneck
Abstract
Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates to sparsity of the latent space in the new model. We evaluate our method on artificial and real data.
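The copula transformation mentioned in the abstract can be illustrated with the classical normal-scores (probability-integral) transform: each feature is passed through its empirical CDF and then the Gaussian quantile function, making the marginals standard-normal while preserving ranks, which is what restores invariance to monotone transformations of the inputs. The sketch below is illustrative only, not the paper's implementation; the function name `copula_transform` is ours.

```python
import numpy as np
from scipy.stats import norm, rankdata

def copula_transform(x):
    """Normal-scores transform applied column-wise (illustrative sketch).

    Maps each feature through its empirical CDF and the Gaussian quantile
    function, yielding approximately standard-normal marginals while
    preserving the rank ordering (and hence rank correlations) of the data.
    """
    n = x.shape[0]
    # Rank-based empirical CDF in (0, 1), computed per column
    u = rankdata(x, axis=0) / (n + 1)
    # Gaussian quantile function gives marginally standard-normal features
    return norm.ppf(u)

# Example: features with heavy-tailed (log-normal) marginals
rng = np.random.default_rng(0)
x = np.exp(rng.normal(size=(1000, 2)))
z = copula_transform(x)
# z now has approximately standard-normal marginals; the ordering of
# samples within each feature is unchanged
```

Because the transform is rank-based, any strictly monotone distortion of a feature produces exactly the same transformed values, which is the invariance property the abstract appeals to.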
Cite
Text
Wieczorek et al. "Learning Sparse Latent Representations with the Deep Copula Information Bottleneck." International Conference on Learning Representations, 2018.
Markdown
[Wieczorek et al. "Learning Sparse Latent Representations with the Deep Copula Information Bottleneck." International Conference on Learning Representations, 2018.](https://mlanthology.org/iclr/2018/wieczorek2018iclr-learning/)
BibTeX
@inproceedings{wieczorek2018iclr-learning,
  title = {{Learning Sparse Latent Representations with the Deep Copula Information Bottleneck}},
  author = {Wieczorek, Aleksander and Wieser, Mario and Murezzan, Damian and Roth, Volker},
  booktitle = {International Conference on Learning Representations},
  year = {2018},
  url = {https://mlanthology.org/iclr/2018/wieczorek2018iclr-learning/}
}