Learning Deep Parsimonious Representations
Abstract
In this paper we aim at facilitating generalization for deep networks while supporting interpretability of the learned representations. Towards this goal, we propose a clustering-based regularization that encourages parsimonious representations. Our k-means style objective is easy to optimize and flexible, supporting various forms of clustering, including sample and spatial clustering as well as co-clustering. We demonstrate the effectiveness of our approach on the tasks of unsupervised learning, classification, fine-grained categorization, and zero-shot learning.
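The abstract's "k-means style objective" can be illustrated with a minimal sketch: penalize each hidden representation by its squared distance to the nearest cluster center, and update centers as the mean of their assigned representations. This is an assumed, simplified illustration of a clustering-based regularizer, not the paper's exact formulation; all names (`parsimony_regularizer`, `update_centers`) are hypothetical.

```python
import numpy as np

def parsimony_regularizer(reps, centers):
    """K-means-style regularizer: mean squared distance of each
    representation to its nearest cluster center (illustrative sketch;
    an assumption, not the paper's exact method)."""
    # Pairwise squared distances, shape (n_samples, n_clusters)
    d2 = ((reps[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)  # hard nearest-center assignment
    loss = d2[np.arange(len(reps)), assign].mean()
    return loss, assign

def update_centers(reps, assign, n_clusters):
    """One k-means-style center update: mean of assigned representations
    (empty clusters fall back to a zero vector)."""
    return np.stack([
        reps[assign == k].mean(axis=0) if (assign == k).any()
        else np.zeros(reps.shape[1])
        for k in range(n_clusters)
    ])

rng = np.random.default_rng(0)
reps = rng.normal(size=(8, 4))   # stand-in for a mini-batch of hidden representations
centers = reps[:3].copy()        # initialize 3 centers from samples
loss, assign = parsimony_regularizer(reps, centers)
centers = update_centers(reps, assign, 3)
loss2, _ = parsimony_regularizer(reps, centers)
```

In training, such a loss would be added (with a weight) to the task loss, pulling representations toward a small set of centers; the alternating assign/update steps monotonically decrease the clustering penalty, as `loss2 <= loss` shows here.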
Cite
Text
Liao et al. "Learning Deep Parsimonious Representations." Neural Information Processing Systems, 2016.
Markdown
[Liao et al. "Learning Deep Parsimonious Representations." Neural Information Processing Systems, 2016.](https://mlanthology.org/neurips/2016/liao2016neurips-learning/)
BibTeX
@inproceedings{liao2016neurips-learning,
title = {{Learning Deep Parsimonious Representations}},
author = {Liao, Renjie and Schwing, Alex and Zemel, Richard and Urtasun, Raquel},
booktitle = {Neural Information Processing Systems},
year = {2016},
pages = {5076-5084},
url = {https://mlanthology.org/neurips/2016/liao2016neurips-learning/}
}