A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models
Abstract
Score matching provides an effective approach to learning flexible unnormalized models, but its scalability is limited by the need to evaluate second-order derivatives of the model's log-density. In this paper, we present a scalable approximation to a general family of learning objectives including score matching, by observing a new connection between these objectives and Wasserstein gradient flows. We present applications with promise in learning neural density estimators on manifolds, and training implicit variational and Wasserstein auto-encoders with a manifold-valued prior.
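To make the bottleneck concrete, the sketch below contrasts the vanilla score matching objective, whose trace-of-Hessian term requires second-order derivatives in every dimension, with a Hutchinson-style stochastic trace estimator that needs only Hessian-vector products. This is an illustrative JAX sketch with a toy log-density; `log_density` and `params` are placeholders, and the estimator shown is the generic Hutchinson trick, not the Wasserstein-gradient-flow approximation developed in the paper.

```python
import jax
import jax.numpy as jnp

def log_density(params, x):
    # Hypothetical unnormalized log-density log q_theta(x); a toy
    # quadratic energy keeps the example self-contained.
    return -0.5 * jnp.sum((x - params) ** 2)

def score_matching_loss(params, x):
    # Vanilla score matching: 1/2 ||grad_x log q||^2 + tr(Hessian_x log q).
    # The full Hessian costs O(d) backward passes in dimension d.
    score = jax.grad(log_density, argnums=1)(params, x)
    hess = jax.hessian(log_density, argnums=1)(params, x)
    return 0.5 * jnp.sum(score ** 2) + jnp.trace(hess)

def hutchinson_score_matching_loss(params, x, key):
    # Hutchinson estimator: tr(H) ~= E_v[v^T H v] with Rademacher v,
    # computed via a single Hessian-vector product (jvp of the score).
    v = jax.random.rademacher(key, shape=x.shape, dtype=x.dtype)
    score_fn = lambda y: jax.grad(log_density, argnums=1)(params, y)
    score, hvp = jax.jvp(score_fn, (x,), (v,))
    return 0.5 * jnp.sum(score ** 2) + jnp.dot(v, hvp)

key = jax.random.PRNGKey(0)
params = jnp.zeros(3)
x = jax.random.normal(key, (3,))
print(score_matching_loss(params, x))
print(hutchinson_score_matching_loss(params, x, jax.random.split(key)[1]))
```

In expectation over `v`, the stochastic loss matches the exact one, which is why trace estimators of this kind make score-matching-type objectives scale to high dimensions.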
Cite
Text
Wang et al. "A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models." Artificial Intelligence and Statistics, 2020.
Markdown
[Wang et al. "A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models." Artificial Intelligence and Statistics, 2020.](https://mlanthology.org/aistats/2020/wang2020aistats-wasserstein/)
BibTeX
@inproceedings{wang2020aistats-wasserstein,
title = {{A Wasserstein Minimum Velocity Approach to Learning Unnormalized Models}},
author = {Wang, Ziyu and Cheng, Shuyu and Li, Yueru and Zhu, Jun and Zhang, Bo},
booktitle = {Artificial Intelligence and Statistics},
year = {2020},
pages = {3728--3738},
volume = {108},
url = {https://mlanthology.org/aistats/2020/wang2020aistats-wasserstein/}
}