Training Normalizing Flows from Dependent Data
Abstract
Normalizing flows are powerful non-parametric statistical models that function as a hybrid between density estimators and generative models. Current learning algorithms for normalizing flows assume that data points are sampled independently, an assumption that is frequently violated in practice, which may lead to erroneous density estimation and data generation. We propose a likelihood objective of normalizing flows incorporating dependencies between the data points, for which we derive a flexible and efficient learning algorithm suitable for different dependency structures. We show that respecting dependencies between observations can improve empirical results on both synthetic and real-world data, and leads to higher statistical power in a downstream application to genome-wide association studies.
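The core idea can be illustrated with a minimal sketch. A normalizing flow evaluates log-likelihoods via the change-of-variables formula, log p(x) = log p_Z(f(x)) + log|det Df(x)|; the standard objective sums this over observations as if they were independent. The sketch below (not the authors' implementation; the affine flow, the batch, and the covariance matrix `C` are all made up for illustration) contrasts the i.i.d. objective with a dependent-data variant in which the i.i.d. base density over the batch is replaced by a joint Gaussian whose covariance encodes known dependencies between observations:

```python
import numpy as np

# Toy 1-D affine flow z = f(x) = (x - mu) / sigma, so log|det df/dx| = -log(sigma).
mu, sigma = 0.5, 2.0

def flow_forward(x):
    z = (x - mu) / sigma
    log_det = -np.log(sigma) * np.ones_like(x)
    return z, log_det

rng = np.random.default_rng(0)
x = mu + sigma * rng.standard_normal(4)  # a small "batch" of observations

z, log_det = flow_forward(x)

# (a) Standard i.i.d. objective: sum of univariate standard-normal log densities.
iid_ll = np.sum(-0.5 * z**2 - 0.5 * np.log(2 * np.pi) + log_det)

# (b) Dependent-data variant (illustrative): a joint zero-mean Gaussian over the
# whole batch of latents, with covariance C encoding dependencies between the
# observations (C here is an arbitrary equicorrelation matrix, not from the paper).
C = 0.7 * np.eye(4) + 0.3 * np.ones((4, 4))
_, logdet_C = np.linalg.slogdet(C)
dep_ll = (-0.5 * z @ np.linalg.solve(C, z)
          - 0.5 * logdet_C
          - 0.5 * len(z) * np.log(2 * np.pi)
          + np.sum(log_det))
```

With `C` equal to the identity matrix, the dependent objective reduces exactly to the i.i.d. one, which is why it can be seen as a generalization rather than a replacement.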
Cite
Text
Kirchler et al. "Training Normalizing Flows from Dependent Data." International Conference on Machine Learning, 2023.
Markdown
[Kirchler et al. "Training Normalizing Flows from Dependent Data." International Conference on Machine Learning, 2023.](https://mlanthology.org/icml/2023/kirchler2023icml-training/)
BibTeX
@inproceedings{kirchler2023icml-training,
title = {{Training Normalizing Flows from Dependent Data}},
author = {Kirchler, Matthias and Lippert, Christoph and Kloft, Marius},
booktitle = {International Conference on Machine Learning},
year = {2023},
pages = {17105--17121},
volume = {202},
url = {https://mlanthology.org/icml/2023/kirchler2023icml-training/}
}