Dimensionality Reduction for Representing the Knowledge of Probabilistic Models
Abstract
Most deep learning models rely on expressive high-dimensional representations to achieve good performance on tasks such as classification. However, the high dimensionality of these representations makes them difficult to interpret and prone to overfitting. We propose a simple, intuitive, and scalable dimensionality reduction framework that takes into account the soft probabilistic interpretation of standard deep models for classification. When applied to visualization, our representations reflect inter-class distances more accurately than standard visualization techniques such as t-SNE. We show experimentally that our framework improves generalization to unseen categories in zero-shot learning. We also provide a finite-sample error upper bound guarantee for the method.
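To illustrate the general idea (this is a minimal sketch, not the authors' framework), the snippet below treats a classifier's softmax probability vectors as the representation to be reduced, projects them to two dimensions with PCA as a generic stand-in reducer, and compares inter-class centroid distances against a t-SNE embedding of the raw features. The data, dimensions, and choice of PCA are all hypothetical assumptions for demonstration.

```python
# Illustrative sketch only: PCA on softmax outputs stands in for the paper's
# learned dimensionality reduction; all data below is synthetic.
import numpy as np
from scipy.special import softmax
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Hypothetical setup: 300 samples, 64-D features, 10 classes; random logits
# stand in for the output layer of a trained deep classifier.
features = rng.normal(size=(300, 64))
logits = features @ rng.normal(size=(64, 10))
probs = softmax(logits, axis=1)  # the model's soft probabilistic interpretation

# Reduce the probability vectors (not the raw features) to 2 dimensions.
probs_2d = PCA(n_components=2).fit_transform(probs)

# Baseline: t-SNE applied to the raw high-dimensional features.
tsne_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)

labels = probs.argmax(axis=1)

def inter_class_distances(emb, labels):
    """Pairwise Euclidean distances between class centroids in an embedding."""
    centroids = np.stack([emb[labels == c].mean(axis=0) for c in np.unique(labels)])
    diffs = centroids[:, None, :] - centroids[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

# Compare how each 2-D embedding spaces the classes apart.
print(inter_class_distances(probs_2d, labels))
print(inter_class_distances(tsne_2d, labels))
```

The point of the comparison is the abstract's claim: an embedding derived from the model's probabilistic outputs can preserve inter-class distances, whereas t-SNE on raw features optimizes local neighborhood structure and need not.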
Cite
Text
Law et al. "Dimensionality Reduction for Representing the Knowledge of Probabilistic Models." International Conference on Learning Representations, 2019.

Markdown
[Law et al. "Dimensionality Reduction for Representing the Knowledge of Probabilistic Models." International Conference on Learning Representations, 2019.](https://mlanthology.org/iclr/2019/law2019iclr-dimensionality/)

BibTeX
@inproceedings{law2019iclr-dimensionality,
  title = {{Dimensionality Reduction for Representing the Knowledge of Probabilistic Models}},
  author = {Law, Marc T and Snell, Jake and Farahmand, Amir-massoud and Urtasun, Raquel and Zemel, Richard S},
  booktitle = {International Conference on Learning Representations},
  year = {2019},
  url = {https://mlanthology.org/iclr/2019/law2019iclr-dimensionality/}
}