Wasserstein Learning of Deep Generative Point Process Models
Abstract
Point processes are becoming very popular for modeling asynchronous sequential data due to their sound mathematical foundation and their strength in modeling a variety of real-world phenomena. Currently, they are often characterized via the intensity function, which limits the model's expressiveness because of the unrealistic parametric forms assumed in practice. Furthermore, they are learned via a maximum likelihood approach, which is prone to failure when the distribution over sequences is multi-modal. In this paper, we propose an intensity-free approach to point process modeling that transforms nuisance processes into a target one. Furthermore, we train the model in a likelihood-free manner by leveraging a Wasserstein distance between point processes. Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones.
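To make the idea concrete, the following is a minimal, hypothetical PyTorch sketch of this kind of setup: a recurrent generator transforms the inter-arrival gaps of a nuisance homogeneous Poisson process into gaps of a target process, and a critic is trained with a WGAN-style objective (weight clipping) as a stand-in for the Wasserstein distance between point processes. The network sizes, the Poisson rate, the use of inter-event gaps as inputs, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: generator maps nuisance Poisson gaps to target gaps;
# critic scores sequences and is trained with a Wasserstein-style loss.
import torch
import torch.nn as nn

SEQ_LEN, HIDDEN = 20, 32  # assumed sequence length and hidden size

class Generator(nn.Module):
    """Transforms noise inter-arrival gaps into positive target gaps."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(1, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, 1)
    def forward(self, z_gaps):                      # (B, L, 1) nuisance gaps
        h, _ = self.rnn(z_gaps)
        return nn.functional.softplus(self.out(h))  # keep gaps positive

class Critic(nn.Module):
    """Scores a gap sequence; no sigmoid, as in a Wasserstein critic."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(1, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, 1)
    def forward(self, gaps):
        h, _ = self.rnn(gaps)
        return self.out(h[:, -1])                   # one score per sequence

def sample_nuisance(batch, rate=1.0):
    """Inter-arrival gaps of a homogeneous Poisson process (exponential)."""
    return torch.distributions.Exponential(rate).sample((batch, SEQ_LEN, 1))

G, D = Generator(), Critic()
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_d = torch.optim.RMSprop(D.parameters(), lr=5e-5)

def train_step(real_gaps, n_critic=5, clip=0.01):
    for _ in range(n_critic):                       # critic updates
        fake = G(sample_nuisance(real_gaps.size(0))).detach()
        loss_d = D(fake).mean() - D(real_gaps).mean()
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        for p in D.parameters():                    # crude Lipschitz control
            p.data.clamp_(-clip, clip)
    fake = G(sample_nuisance(real_gaps.size(0)))    # generator update
    loss_g = -D(fake).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy usage: "real" gaps drawn from an exponential with a different rate.
real = torch.distributions.Exponential(0.5).sample((8, SEQ_LEN, 1))
print(train_step(real))
```

Weight clipping is used here only because it is the simplest way to enforce the critic's Lipschitz constraint in a short sketch; a gradient penalty or a distance tailored to point processes could be substituted without changing the overall structure.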
Cite
Text
Xiao et al. "Wasserstein Learning of Deep Generative Point Process Models." Neural Information Processing Systems, 2017.
Markdown
[Xiao et al. "Wasserstein Learning of Deep Generative Point Process Models." Neural Information Processing Systems, 2017.](https://mlanthology.org/neurips/2017/xiao2017neurips-wasserstein/)
BibTeX
@inproceedings{xiao2017neurips-wasserstein,
title = {{Wasserstein Learning of Deep Generative Point Process Models}},
author = {Xiao, Shuai and Farajtabar, Mehrdad and Ye, Xiaojing and Yan, Junchi and Song, Le and Zha, Hongyuan},
booktitle = {Neural Information Processing Systems},
year = {2017},
pages = {3247--3257},
url = {https://mlanthology.org/neurips/2017/xiao2017neurips-wasserstein/}
}