Using Perturbation to Improve Goodness-of-Fit Tests Based on Kernelized Stein Discrepancy

Abstract

Kernelized Stein discrepancy (KSD) is a score-based discrepancy widely employed in goodness-of-fit tests. It is applicable even when the target density is known only up to a normalising constant, as is common in Bayesian analysis. We show theoretically and empirically that the power of the KSD test can be low when the target distribution has well-separated modes, because few samples fall in the regions where the score functions of the alternative and the target distributions differ the most. To improve its test power, we propose to perturb the target and alternative distributions before applying the KSD test. The perturbation uses a Markov transition kernel that leaves the target invariant but perturbs alternatives. We provide numerical evidence that the proposed approach can lead to substantially higher power than the standard KSD test when the target and the alternative are mixture distributions that differ only in their mixing weights.
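
To make the two ingredients of the abstract concrete, the following minimal sketch (not taken from the paper) computes a V-statistic estimate of KSD^2 for a one-dimensional two-component Gaussian mixture target, and applies a p-invariant Markov step to the sample before re-evaluating the statistic. All names (score_p, ksd_sq, mh_perturb), the RBF kernel with a fixed bandwidth, and the plain random-walk Metropolis-Hastings kernel are illustrative assumptions of this sketch; the perturbation operator used in the paper is designed specifically for mixture targets and may differ substantially.

# Minimal sketch: KSD^2 V-statistic for a 1-D two-mode Gaussian mixture target,
# plus an illustrative p-invariant perturbation (random-walk Metropolis-Hastings).
# This is NOT the authors' implementation; names and choices here are hypothetical.
import numpy as np

WEIGHTS, MEANS, STD = (0.5, 0.5), (-5.0, 5.0), 1.0   # target p: equal-weight mixture

def log_p(x):
    """Log density of p up to an additive constant (enough for MH acceptance)."""
    comps = [w * np.exp(-0.5 * ((x - m) / STD) ** 2) for w, m in zip(WEIGHTS, MEANS)]
    return np.log(np.sum(comps, axis=0))

def score_p(x):
    """Score d/dx log p(x): responsibility-weighted sum of component scores."""
    comps = np.array([w * np.exp(-0.5 * ((x - m) / STD) ** 2)
                      for w, m in zip(WEIGHTS, MEANS)])
    resp = comps / comps.sum(axis=0)                  # component responsibilities
    comp_scores = np.array([-(x - m) / STD ** 2 for m in MEANS])
    return (resp * comp_scores).sum(axis=0)

def ksd_sq(x, h=1.0):
    """V-statistic estimate of KSD^2 with an RBF kernel of bandwidth h (1-D)."""
    s = score_p(x)
    diff = x[:, None] - x[None, :]                    # x_i - x_j
    k = np.exp(-0.5 * diff ** 2 / h ** 2)             # k(x_i, x_j)
    grad_y_k = diff / h ** 2 * k                      # d/dy k(x, y) at (x_i, x_j)
    trace_term = (1.0 / h ** 2 - diff ** 2 / h ** 4) * k
    u = (s[:, None] * s[None, :] * k                  # Stein kernel u_p(x_i, x_j)
         + s[:, None] * grad_y_k
         - s[None, :] * grad_y_k                      # d/dx k = -d/dy k for RBF
         + trace_term)
    return u.mean()

def mh_perturb(x, n_steps=50, step=2.0, seed=None):
    """Random-walk MH steps targeting p; this Markov kernel leaves p invariant."""
    rng = np.random.default_rng(seed)
    x = x.copy()
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        accept = np.log(rng.uniform(size=x.shape)) < log_p(prop) - log_p(x)
        x = np.where(accept, prop, x)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    # Alternative q: same components as p but mixing weights 0.9 / 0.1.
    from_left = rng.uniform(size=n) < 0.9
    x_q = np.where(from_left, rng.normal(-5.0, 1.0, n), rng.normal(5.0, 1.0, n))
    print("KSD^2 on q-sample, no perturbation:   ", ksd_sq(x_q))
    print("KSD^2 on q-sample, after perturbation:", ksd_sq(mh_perturb(x_q, seed=1)))

With modes this well separated, a random-walk kernel rarely moves samples between modes, so this particular choice need not reproduce the power gain reported in the paper; the sketch only shows where a p-invariant perturbation is slotted in before the test statistic is computed.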

Cite

Text

Liu et al. "Using Perturbation to Improve Goodness-of-Fit Tests Based on Kernelized Stein Discrepancy." NeurIPS 2022 Workshops: SBM, 2022.

Markdown

[Liu et al. "Using Perturbation to Improve Goodness-of-Fit Tests Based on Kernelized Stein Discrepancy." NeurIPS 2022 Workshops: SBM, 2022.](https://mlanthology.org/neuripsw/2022/liu2022neuripsw-using/)

BibTeX

@inproceedings{liu2022neuripsw-using,
  title     = {{Using Perturbation to Improve Goodness-of-Fit Tests Based on Kernelized Stein Discrepancy}},
  author    = {Liu, Xing and Duncan, Andrew and Gandy, Axel},
  booktitle = {NeurIPS 2022 Workshops: SBM},
  year      = {2022},
  url       = {https://mlanthology.org/neuripsw/2022/liu2022neuripsw-using/}
}