Hyperspherical Variational Auto-Encoders
Abstract
The Variational Auto-Encoder (VAE) is one of the most widely used unsupervised machine learning models. But although the default choice of a Gaussian distribution for both the prior and posterior is mathematically convenient and often leads to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure. To address this issue we propose using a von Mises-Fisher (vMF) distribution instead, leading to a hyperspherical latent space. Through a series of experiments we show how such a hyperspherical VAE, or $\mathcal{S}$-VAE, is more suitable for capturing data with a hyperspherical latent structure, while outperforming a normal VAE, or $\mathcal{N}$-VAE, in low dimensions on other data types.
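As a quick illustration of the two latent parameterizations, the sketch below draws samples from a von Mises-Fisher distribution, whose density on the unit hypersphere is proportional to $\exp(\kappa \mu^\top z)$ for a mean direction $\mu$ and concentration $\kappa$, and contrasts them with Gaussian samples. This is not the paper's training procedure; it only uses the off-the-shelf `scipy.stats.vonmises_fisher` sampler (available in recent SciPy releases), and the dimension and $\kappa$ value are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): latent codes drawn from a
# von Mises-Fisher distribution live on the unit hypersphere, while Gaussian
# latent codes do not. Dimension d and concentration kappa are assumptions.
import numpy as np
from scipy.stats import vonmises_fisher  # requires a recent SciPy release

rng = np.random.default_rng(0)

d = 3                                  # latent dimension; vMF samples lie on S^{d-1}
mu = np.array([0.0, 0.0, 1.0])         # mean direction (must be a unit vector)
kappa = 50.0                           # concentration: larger = tighter around mu

vmf = vonmises_fisher(mu=mu, kappa=kappa, seed=rng)
z_sphere = vmf.rvs(1000)               # shape (1000, 3); every row has unit norm

z_gauss = rng.normal(size=(1000, d))   # N-VAE-style Gaussian latent samples

print(np.linalg.norm(z_sphere, axis=1).round(6)[:5])  # all 1.0
print(np.linalg.norm(z_gauss, axis=1).round(2)[:5])   # varying norms
```

Every hyperspherical sample has unit norm by construction, which is exactly the latent structure a Gaussian parameterization cannot impose.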
Cite
Text
Davidson et al. "Hyperspherical Variational Auto-Encoders." Conference on Uncertainty in Artificial Intelligence, 2018.
Markdown
[Davidson et al. "Hyperspherical Variational Auto-Encoders." Conference on Uncertainty in Artificial Intelligence, 2018.](https://mlanthology.org/uai/2018/davidson2018uai-hyperspherical/)
BibTeX
@inproceedings{davidson2018uai-hyperspherical,
title = {{Hyperspherical Variational Auto-Encoders}},
author = {Davidson, Tim R. and Falorsi, Luca and De Cao, Nicola and Kipf, Thomas and Tomczak, Jakub M.},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2018},
pages = {856--865},
url = {https://mlanthology.org/uai/2018/davidson2018uai-hyperspherical/}
}