Random Feature Stein Discrepancies

Abstract

Computable Stein discrepancies have been deployed for a variety of applications, ranging from sampler selection in posterior inference to approximate Bayesian inference to goodness-of-fit testing. Existing convergence-determining Stein discrepancies admit strong theoretical guarantees but suffer from a computational cost that grows quadratically in the sample size. While linear-time Stein discrepancies have been proposed for goodness-of-fit testing, they exhibit avoidable degradations in testing power, even when power is explicitly optimized. To address these shortcomings, we introduce feature Stein discrepancies (ΦSDs), a new family of quality measures that can be cheaply approximated using importance sampling. We show how to construct ΦSDs that provably determine the convergence of a sample to its target and develop high-accuracy approximations, random ΦSDs (RΦSDs), that are computable in near-linear time. In our experiments with sampler selection for approximate posterior inference and goodness-of-fit testing, RΦSDs perform as well as or better than quadratic-time kernel Stein discrepancies (KSDs) while being orders of magnitude faster to compute.
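To make the importance-sampling construction concrete, below is a minimal NumPy sketch of a near-linear-time RΦSD-style estimate. It is not the paper's exact construction: the standard Gaussian target, the complex Fourier feature, the Gaussian feature distribution ν, the overdispersed Gaussian proposal q, and all function names are illustrative assumptions.

```python
import numpy as np

def score(x):
    # Gradient of log p for a standard Gaussian target N(0, I) (illustrative choice).
    return -x

def stein_feature(X, omega):
    # Langevin Stein operator applied coordinate-wise to the Fourier
    # feature phi(x, omega) = exp(i omega . x):
    #   (T_p phi)_j(x) = d_j log p(x) * phi(x) + d_j phi(x)
    phi = np.exp(1j * (X @ omega))                               # shape (n,)
    return score(X) * phi[:, None] + 1j * omega[None, :] * phi[:, None]

def rphi_sd(X, M=100, sigma_nu=1.0, sigma_q=2.0, r=2.0, seed=0):
    # Importance-sampled estimate of a feature Stein discrepancy: draw M
    # frequencies from the proposal q = N(0, sigma_q^2 I), reweight toward
    # the feature distribution nu = N(0, sigma_nu^2 I), and average the
    # r-th power of the norm of the empirical Stein feature mean.
    # Cost is O(n M d): near-linear in the sample size n.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    omegas = sigma_q * rng.standard_normal((M, d))
    total = 0.0
    for omega in omegas:
        # log importance weight: log nu(omega) - log q(omega)
        log_w = (0.5 * omega @ omega) * (1 / sigma_q**2 - 1 / sigma_nu**2) \
                + d * np.log(sigma_q / sigma_nu)
        mean_feat = stein_feature(X, omega).mean(axis=0)         # shape (d,)
        total += np.exp(log_w) * np.linalg.norm(mean_feat) ** r
    return (total / M) ** (1.0 / r)

# Usage: samples close to the target N(0, I) should yield a smaller discrepancy.
rng = np.random.default_rng(1)
good = rng.standard_normal((2000, 2))
bad = good + 1.0
print(rphi_sd(good), rphi_sd(bad))
```

Drawing the frequencies from an overdispersed proposal rather than from ν itself mirrors the role importance sampling plays in the paper, namely controlling the accuracy of the feature-space approximation; the specific Gaussian choices above are only for demonstration.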

Cite

Text

Huggins and Mackey. "Random Feature Stein Discrepancies." Neural Information Processing Systems, 2018.

Markdown

[Huggins and Mackey. "Random Feature Stein Discrepancies." Neural Information Processing Systems, 2018.](https://mlanthology.org/neurips/2018/huggins2018neurips-random/)

BibTeX

@inproceedings{huggins2018neurips-random,
  title     = {{Random Feature Stein Discrepancies}},
  author    = {Huggins, Jonathan and Mackey, Lester},
  booktitle = {Neural Information Processing Systems},
  year      = {2018},
  pages     = {1899--1909},
  url       = {https://mlanthology.org/neurips/2018/huggins2018neurips-random/}
}