Discrete Flows: Invertible Generative Models of Discrete Data
Abstract
While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. In this paper, we show that flows can in fact be extended to discrete events---and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Discrete flows have numerous applications. We consider two flow architectures: discrete autoregressive flows that enable bidirectionality, allowing, for example, tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows that enable efficient non-autoregressive generation as in RealNVP. Empirically, we find that discrete autoregressive flows outperform autoregressive baselines on synthetic discrete distributions, an addition task, and Potts models; and bipartite flows can obtain competitive performance with autoregressive baselines on character-level language modeling for Penn Tree Bank and text8.
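The "simple change-of-variables formula" mentioned in the abstract is the fact that for a bijection f on a discrete space, probability mass is carried over directly, p_Y(y) = p_X(f^{-1}(y)), with no log-determinant-Jacobian term. The paper realizes such bijections with modular location-scale arithmetic on K-ary tokens. Below is a minimal Python sketch of one bipartite coupling step; the function names are illustrative rather than from the authors' code, and the location mu and scale are fixed stand-ins here, whereas in the paper they are produced by a neural network from the untransformed half.

def mod_inverse(a, K):
    # Modular multiplicative inverse of a; requires gcd(a, K) == 1 (Python 3.8+).
    return pow(a, -1, K)

def forward(x1, x2, mu, scale, K):
    # Transformed half: y2 = (scale * x2 + mu) mod K; x1 passes through unchanged.
    y2 = [(scale * x + m) % K for x, m in zip(x2, mu)]
    return x1, y2

def inverse(y1, y2, mu, scale, K):
    # Recover x2 = scale^{-1} * (y2 - mu) mod K.
    inv = mod_inverse(scale, K)
    x2 = [(inv * (y - m)) % K for y, m in zip(y2, mu)]
    return y1, x2

K = 5                   # vocabulary size (assumed for this sketch)
x1, x2 = [1, 4], [2, 0, 3]
mu = [3, 1, 4]          # in the paper, mu is a network output; fixed here for clarity
scale = 2               # must be coprime with K for the map to stay invertible
y1, y2 = forward(x1, x2, mu, scale, K)
assert inverse(y1, y2, mu, scale, K) == (x1, x2)

Stacking such steps while alternating which half is held fixed gives the RealNVP-style non-autoregressive sampler; the autoregressive variant instead conditions each position's location and scale on the preceding (or following) tokens.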
Cite
Text
Tran et al. "Discrete Flows: Invertible Generative Models of Discrete Data." Neural Information Processing Systems, 2019.

Markdown
[Tran et al. "Discrete Flows: Invertible Generative Models of Discrete Data." Neural Information Processing Systems, 2019.](https://mlanthology.org/neurips/2019/tran2019neurips-discrete/)

BibTeX
@inproceedings{tran2019neurips-discrete,
  title     = {{Discrete Flows: Invertible Generative Models of Discrete Data}},
  author    = {Tran, Dustin and Vafa, Keyon and Agrawal, Kumar and Dinh, Laurent and Poole, Ben},
  booktitle = {Neural Information Processing Systems},
  year      = {2019},
  pages     = {14719--14728},
  url       = {https://mlanthology.org/neurips/2019/tran2019neurips-discrete/}
}