BasisVAE: Translation-Invariant Feature-Level Clustering with Variational Autoencoders
Abstract
Variational Autoencoders (VAEs) provide a flexible and scalable framework for non-linear dimensionality reduction. However, in application domains such as genomics, where data sets are typically tabular and high-dimensional, a black-box approach to dimensionality reduction does not provide sufficient insights. Common data analysis workflows additionally use clustering techniques to identify groups of similar features. This usually leads to a two-stage process; however, it would be desirable to construct a joint modelling framework for simultaneous dimensionality reduction and clustering of features. In this paper, we propose to achieve this through the BasisVAE: a combination of the VAE and a probabilistic clustering prior, which lets us learn a one-hot basis function representation as part of the decoder network. Furthermore, for scenarios where not all features are aligned, we develop an extension to handle translation-invariant basis functions. We show how a collapsed variational inference scheme leads to scalable and efficient inference for BasisVAE, demonstrated on various toy examples as well as on single-cell gene expression data.
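To make the architecture concrete, below is a minimal PyTorch sketch of a BasisVAE-style decoder, not the authors' implementation. It assumes a shared network that maps the latent code z to K basis function values, with each feature softly assigned to one basis via learned categorical logits (a relaxation of the one-hot assignment). The names BasisDecoder, assign_logits, and scale are illustrative; the translation-invariance extension and the collapsed variational treatment of the clustering prior are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasisDecoder(nn.Module):
    """Sketch of a BasisVAE-style decoder: a shared network produces K basis
    functions phi_k(z), and each of the P features is (softly) assigned to
    one basis via a variational categorical distribution."""

    def __init__(self, latent_dim: int, n_features: int, n_basis: int, hidden_dim: int = 64):
        super().__init__()
        # Shared mapping z -> [phi_1(z), ..., phi_K(z)]
        self.basis_net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_basis),
        )
        # Assignment logits: q(feature j assigned to basis k), one row per feature
        self.assign_logits = nn.Parameter(torch.zeros(n_features, n_basis))
        # Per-feature scaling (hypothetical parameterisation)
        self.scale = nn.Parameter(torch.ones(n_features))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        phi = self.basis_net(z)                        # (batch, K)
        q = F.softmax(self.assign_logits, dim=-1)      # (P, K), soft one-hot
        # Expected reconstruction E_q[scale_j * phi_{c_j}(z)] for each feature j
        return self.scale * (phi @ q.t())              # (batch, P)

# Usage: plug into a standard VAE, e.g.
# decoder = BasisDecoder(latent_dim=2, n_features=100, n_basis=8)
# x_hat = decoder(z)  # z sampled from the encoder's approximate posterior
```

Features whose assignment distributions concentrate on the same basis function end up sharing a decoder component, which is what yields the feature-level clustering described in the abstract.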
Cite
Text
Märtens and Yau. "BasisVAE: Translation-Invariant Feature-Level Clustering with Variational Autoencoders." Artificial Intelligence and Statistics, 2020.
Markdown
[Märtens and Yau. "BasisVAE: Translation-Invariant Feature-Level Clustering with Variational Autoencoders." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/martens2020aistats-basisvae/)
BibTeX
@inproceedings{martens2020aistats-basisvae,
title = {{BasisVAE: Translation-Invariant Feature-Level Clustering with Variational Autoencoders}},
author = {Märtens, Kaspar and Yau, Christopher},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {2928--2937},
volume = {108},
url = {https://mlanthology.org/aistats/2020/martens2020aistats-basisvae/}
}