Compressing Latent Space via Least Volume

Abstract

This paper introduces Least Volume---a simple yet effective regularization inspired by geometric intuition---that can reduce the number of latent dimensions an autoencoder needs without requiring any prior knowledge of the dataset's intrinsic dimensionality. We show that the Lipschitz continuity of the decoder is the key to making it work, provide a proof that PCA is just a linear special case of it, and reveal that it has a similar PCA-like importance-ordering effect when applied to nonlinear models. We demonstrate the intuition behind the regularization on some pedagogical toy problems, and its effectiveness on several benchmark problems, including MNIST, CIFAR-10, and CelebA.
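The sketch below illustrates the kind of training objective the abstract describes: an autoencoder whose reconstruction loss is augmented with a "volume" penalty on the latent code, paired with a Lipschitz-controlled decoder. It is a minimal, hypothetical illustration, not the paper's implementation: the choice of the geometric mean of per-dimension latent standard deviations as the volume term, the spectral normalization of the decoder, and names such as `TinyAutoencoder`, `volume_penalty`, `eta`, and `lam` are all assumptions made here for illustration.

```python
# Hypothetical sketch of a volume-style latent penalty for an autoencoder.
# Assumptions (not taken from the paper): the volume is the geometric mean of
# per-dimension latent standard deviations, and the decoder's Lipschitz
# constant is bounded via spectral normalization.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm


class TinyAutoencoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Spectral normalization keeps each decoder layer 1-Lipschitz, so the
        # latent code cannot be shrunk "for free" by rescaling the decoder.
        self.decoder = nn.Sequential(
            spectral_norm(nn.Linear(latent_dim, 256)), nn.ReLU(),
            spectral_norm(nn.Linear(256, in_dim)),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


def volume_penalty(z, eta=1e-2):
    # Geometric mean of per-dimension standard deviations over the batch;
    # eta keeps the product well-behaved when a dimension collapses to zero.
    std = z.std(dim=0)
    return torch.exp(torch.log(std + eta).mean())


def training_step(model, x, lam=1e-2):
    # Reconstruction loss plus the (assumed) volume penalty on the latent code.
    recon, z = model(x)
    return nn.functional.mse_loss(recon, x) + lam * volume_penalty(z)
```

In this reading, the penalty encourages as many latent dimensions as possible to collapse toward zero variance, while the Lipschitz constraint on the decoder prevents the trivial workaround of compressing the latents and amplifying them back inside the decoder; the weights `lam` and `eta` are placeholder hyperparameters, not values from the paper.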

Cite

Text

Chen and Fuge. "Compressing Latent Space via Least Volume." International Conference on Learning Representations, 2024.

Markdown

[Chen and Fuge. "Compressing Latent Space via Least Volume." International Conference on Learning Representations, 2024.](https://mlanthology.org/iclr/2024/chen2024iclr-compressing/)

BibTeX

@inproceedings{chen2024iclr-compressing,
  title     = {{Compressing Latent Space via Least Volume}},
  author    = {Chen, Qiuyi and Fuge, Mark},
  booktitle = {International Conference on Learning Representations},
  year      = {2024},
  url       = {https://mlanthology.org/iclr/2024/chen2024iclr-compressing/}
}