The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement
Abstract
Existing popular methods for disentanglement rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization function that encourages the input Hessian of a function to be diagonal. Our method is completely model-agnostic and can be applied to any deep generator with just a few lines of code. We show that our method automatically uncovers meaningful factors of variation in the standard basis when applied to ProgressiveGAN across several datasets. Additionally, we demonstrate that our regularization term can be used to identify interpretable directions in BigGAN's latent space in a fully unsupervised fashion. Finally, we provide empirical evidence that our regularization term encourages sparsity when applied to overparameterized latent spaces.
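The core idea can be sketched numerically: a function with a diagonal input Hessian has a constant second directional derivative over Rademacher directions, so the variance of a finite-difference estimate of vᵀHv measures the off-diagonal Hessian mass. The sketch below is a minimal illustration of this estimator, not the authors' released implementation; the function name, signature, and hyperparameters (`eps`, `num_rademacher`) are assumptions for illustration.

```python
import numpy as np

def hessian_penalty(f, z, eps=0.1, num_rademacher=32, seed=0):
    """Stochastic estimate of the off-diagonal Hessian magnitude of f at z.

    Uses second-order central differences along random Rademacher
    directions v; the variance of v^T H v over v is proportional to the
    sum of squared off-diagonal Hessian entries (zero iff H is diagonal).
    """
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(num_rademacher):
        v = rng.choice([-1.0, 1.0], size=z.shape)
        # Central difference: (f(z+ev) - 2 f(z) + f(z-ev)) / e^2 ~ v^T H v
        second_diff = (f(z + eps * v) - 2.0 * f(z) + f(z - eps * v)) / eps**2
        estimates.append(second_diff)
    return np.var(np.array(estimates), axis=0)

# An "entangled" function (z0 * z1 has an off-diagonal Hessian) gets a
# large penalty; a separable one (z0^2 + z1^2) gets a near-zero penalty.
z = np.array([0.5, -0.3])
entangled = hessian_penalty(lambda z: z[0] * z[1], z)
separable = hessian_penalty(lambda z: z[0] ** 2 + z[1] ** 2, z)
```

In a training loop this scalar would simply be added to the generator's loss; since both quadratics above are exact under central differences, `entangled` comes out near 4 while `separable` is numerically zero.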
Cite
Text
Peebles et al. "The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement." Proceedings of the European Conference on Computer Vision (ECCV), 2020. doi:10.1007/978-3-030-58539-6_35
Markdown
[Peebles et al. "The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement." Proceedings of the European Conference on Computer Vision (ECCV), 2020.](https://mlanthology.org/eccv/2020/peebles2020eccv-hessian/) doi:10.1007/978-3-030-58539-6_35
BibTeX
@inproceedings{peebles2020eccv-hessian,
title = {{The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement}},
author = {Peebles, William and Peebles, John and Zhu, Jun-Yan and Efros, Alexei and Torralba, Antonio},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2020},
doi = {10.1007/978-3-030-58539-6_35},
url = {https://mlanthology.org/eccv/2020/peebles2020eccv-hessian/}
}