Invertible Hierarchical Generative Model for Images

Abstract

Normalizing flows (NFs) as generative models enjoy desirable properties such as exact invertibility and exact likelihood evaluation, while being efficient to sample from. These properties, however, come at the cost of heavy restrictions on the architecture. Due to these limitations, modeling multi-modal probability distributions can yield poor results even with low-dimensional data. Additionally, typical flow architectures employed on real image datasets produce samples with visible aliasing artifacts and limited variation. The latent decomposition of flow models also falls short of that of competing methods, with uneven contributions to the decoded image. In this work we build an invertible generative model using conditional normalizing flows in a hierarchical fashion to circumvent the aforementioned limitations. We show that we can achieve superior sample quality among flow-based models with fewer parameters than the state of the art. We demonstrate the ability to control individual levels of detail via the latent decomposition of our model.
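For context on the exact-likelihood property mentioned in the abstract: a normalizing flow with an invertible, differentiable map $f_\theta$ evaluates the density of a data point via the standard change-of-variables formula shown below. This is general NF background, not the specific hierarchical conditional construction of this paper; the symbols $f_\theta$, $p_Z$, and $p_X$ are illustrative notation.

% Standard normalizing-flow log-likelihood (background sketch, not this paper's model).
% z = f_theta(x) maps data x to a simple base distribution p_Z (e.g. a standard Gaussian);
% the log-determinant of the Jacobian accounts for the change of volume under f_theta.
\[
\log p_X(x) \;=\; \log p_Z\!\bigl(f_\theta(x)\bigr)
\;+\; \log \left| \det \frac{\partial f_\theta(x)}{\partial x} \right|
\]

Exact invertibility of $f_\theta$ also gives exact sampling by drawing $z \sim p_Z$ and computing $x = f_\theta^{-1}(z)$.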

Cite

Text

Timonen et al. "Invertible Hierarchical Generative Model for Images." Transactions on Machine Learning Research, 2023.

Markdown

[Timonen et al. "Invertible Hierarchical Generative Model for Images." Transactions on Machine Learning Research, 2023.](https://mlanthology.org/tmlr/2023/timonen2023tmlr-invertible/)

BibTeX

@article{timonen2023tmlr-invertible,
  title     = {{Invertible Hierarchical Generative Model for Images}},
  author    = {Timonen, Heikki and Aittala, Miika and Lehtinen, Jaakko},
  journal   = {Transactions on Machine Learning Research},
  year      = {2023},
  url       = {https://mlanthology.org/tmlr/2023/timonen2023tmlr-invertible/}
}