A Kernelized Stein Discrepancy for Goodness-of-Fit Tests
Abstract
We derive a new discrepancy statistic for measuring differences between two probability distributions, based on combining Stein’s identity with reproducing kernel Hilbert space theory. We apply our result to test how well a probabilistic model fits a set of observations, and derive a new class of powerful goodness-of-fit tests that are widely applicable for complex and high-dimensional distributions, even those with computationally intractable normalization constants. Both theoretical and empirical properties of our methods are studied thoroughly.
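As a rough illustration of the idea, the kernelized Stein discrepancy between a model p (known only through its score function, so the normalization constant is never needed) and a sample can be estimated with a U-statistic over a Stein-modified kernel. The sketch below is not the paper's code: it assumes a one-dimensional model, a Gaussian RBF kernel with a fixed bandwidth (the median heuristic is more common in practice), and hypothetical function names.

```python
import numpy as np

def ksd_u_stat(x, score, h=1.0):
    """U-statistic estimate of the kernelized Stein discrepancy (1-D, RBF kernel).

    x     : 1-D array of samples
    score : function returning d/dx log p(x) for the model p
    h     : RBF bandwidth (fixed here for simplicity)
    """
    n = len(x)
    d = x[:, None] - x[None, :]             # pairwise differences x_i - x_j
    k = np.exp(-d**2 / (2 * h**2))          # RBF kernel matrix
    dkx = -d / h**2 * k                     # derivative of k in its first argument
    dky = d / h**2 * k                      # derivative of k in its second argument
    dkxy = (1 / h**2 - d**2 / h**4) * k     # mixed second derivative of k
    s = score(x)
    # Stein kernel u_p(x_i, x_j); its expectation is zero iff the sample follows p
    u = (s[:, None] * s[None, :] * k
         + s[:, None] * dky
         + s[None, :] * dkx
         + dkxy)
    np.fill_diagonal(u, 0.0)                # drop i = j terms (U-statistic)
    return u.sum() / (n * (n - 1))

rng = np.random.default_rng(0)
score = lambda x: -x                        # score of the model N(0, 1)
ksd_null = ksd_u_stat(rng.normal(0.0, 1.0, 500), score)  # samples match the model
ksd_alt  = ksd_u_stat(rng.normal(1.0, 1.0, 500), score)  # mean-shifted samples
```

Under the null (samples drawn from p) the estimate concentrates near zero, while a mean shift drives it clearly positive, which is what the goodness-of-fit test thresholds.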
Cite
Text

Liu et al. "A Kernelized Stein Discrepancy for Goodness-of-Fit Tests." International Conference on Machine Learning, 2016.

Markdown

[Liu et al. "A Kernelized Stein Discrepancy for Goodness-of-Fit Tests." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/liu2016icml-kernelized/)

BibTeX
@inproceedings{liu2016icml-kernelized,
  title = {{A Kernelized Stein Discrepancy for Goodness-of-Fit Tests}},
  author = {Liu, Qiang and Lee, Jason and Jordan, Michael},
  booktitle = {International Conference on Machine Learning},
  year = {2016},
  pages = {276--284},
  volume = {48},
  url = {https://mlanthology.org/icml/2016/liu2016icml-kernelized/}
}