A Comparison of Latent Space Modeling Techniques in a Plain-Vanilla Autoencoder Setting
Abstract
By sampling from the latent space of an autoencoder and decoding the samples back to the original data space, any autoencoder can be turned into a generative model. For this to work, the latent space must be modeled with a distribution from which samples can be drawn. Several simple choices, such as kernel density estimates or a single Gaussian distribution, and more sophisticated ones, such as Gaussian mixture models, copula models, and normalizing flows, are conceivable and have recently been tried. In a plain-vanilla autoencoder setting, this study discusses, assesses, and compares various techniques for capturing the latent space so that an autoencoder becomes a generative model. Furthermore, we provide insights into further aspects of these methods, such as targeted sampling and synthesizing new data with specific features.
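The recipe the abstract describes can be sketched in a few lines: fit a distribution to the latent codes of a trained autoencoder, sample from it, and decode the samples. The sketch below uses the simplest option from the paper's list, a single Gaussian; the latent codes and the `decode` function are hypothetical placeholders standing in for a trained encoder and decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent codes produced by a trained encoder
# (shape: n_samples x latent_dim); stand-in data for illustration.
latent_codes = rng.normal(size=(500, 2))

# Simplest latent-space model: a single Gaussian fitted to the codes.
mu = latent_codes.mean(axis=0)
cov = np.cov(latent_codes, rowvar=False)

# Draw new latent samples from the fitted distribution ...
new_latents = rng.multivariate_normal(mu, cov, size=10)

# ... and decode them back to the data space. `decode` is a placeholder
# linear map standing in for the trained decoder network.
W = rng.normal(size=(2, 8))

def decode(z):
    return z @ W

generated = decode(new_latents)
print(generated.shape)  # (10, 8)
```

Swapping the Gaussian for a kernel density estimate, a Gaussian mixture, a copula model, or a normalizing flow changes only the fit-and-sample step; the decode step is identical.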
Cite
Text
Kächele et al. "A Comparison of Latent Space Modeling Techniques in a Plain-Vanilla Autoencoder Setting." Machine Learning, 2025. doi:10.1007/s10994-025-06784-3
Markdown
[Kächele et al. "A Comparison of Latent Space Modeling Techniques in a Plain-Vanilla Autoencoder Setting." Machine Learning, 2025.](https://mlanthology.org/mlj/2025/kachele2025mlj-comparison/) doi:10.1007/s10994-025-06784-3
BibTeX
@article{kachele2025mlj-comparison,
title = {{A Comparison of Latent Space Modeling Techniques in a Plain-Vanilla Autoencoder Setting}},
author = {Kächele, Fabian and Coblenz, Maximilian and Grothe, Oliver},
journal = {Machine Learning},
year = {2025},
pages = {151},
doi = {10.1007/s10994-025-06784-3},
volume = {114},
url = {https://mlanthology.org/mlj/2025/kachele2025mlj-comparison/}
}