Manifold Preserving Hierarchical Topic Models for Quantization and Approximation
Abstract
We present two complementary topic models for analyzing mixture data that lie on manifolds. First, we propose a quantization method with an additional mid-layer latent variable, which selects only those data points that best preserve the manifold structure of the input. To model the in-between parts of the manifold from this reduced representation of the input, we then introduce a second model that provides a manifold-aware interpolation method. We demonstrate the advantages of these models with experiments on handwritten digit recognition and speech source separation tasks.
Cite
Text
Kim and Smaragdis. "Manifold Preserving Hierarchical Topic Models for Quantization and Approximation." International Conference on Machine Learning, 2013.
Markdown
[Kim and Smaragdis. "Manifold Preserving Hierarchical Topic Models for Quantization and Approximation." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/kim2013icml-manifold/)
BibTeX
@inproceedings{kim2013icml-manifold,
  title = {{Manifold Preserving Hierarchical Topic Models for Quantization and Approximation}},
  author = {Kim, Minje and Smaragdis, Paris},
  booktitle = {International Conference on Machine Learning},
  year = {2013},
  pages = {1373--1381},
  volume = {28},
  url = {https://mlanthology.org/icml/2013/kim2013icml-manifold/}
}