Stein $\Pi$-Importance Sampling
Abstract
Stein discrepancies have emerged as a powerful tool for retrospective improvement of Markov chain Monte Carlo output. However, the question of how to design Markov chains that are well-suited to such post-processing has yet to be addressed. This paper studies Stein importance sampling, in which weights are assigned to the states visited by a $\Pi$-invariant Markov chain to obtain a consistent approximation of $P$, the intended target. Surprisingly, the optimal choice of $\Pi$ is not identical to the target $P$; we therefore propose an explicit construction for $\Pi$ based on a novel variational argument. Explicit conditions for convergence of Stein $\Pi$-Importance Sampling are established. For $\approx 70$% of tasks in the PosteriorDB benchmark, a significant improvement over the analogous post-processing of $P$-invariant Markov chains is reported.
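The core computation behind Stein importance sampling can be sketched as follows: given states $x_1, \dots, x_n$ visited by a $\Pi$-invariant chain and the score $\nabla \log p$ of the target $P$, one assembles the Stein kernel matrix $K_P$ and selects the weights $w$ on the probability simplex that minimise the kernel Stein discrepancy, $\mathrm{KSD}(w) = \sqrt{w^\top K_P w}$. The sketch below is illustrative only and is not the authors' implementation: the Gaussian base kernel, the fixed bandwidth `sigma`, and the SLSQP solver are simplifying assumptions (the kernel Stein discrepancy literature more often uses inverse multi-quadric kernels and dedicated quadratic-programming solvers).

```python
import numpy as np
from scipy.optimize import minimize


def stein_kernel_matrix(samples, score, sigma=1.0):
    """Langevin Stein kernel k_P built from a Gaussian base kernel.

    samples : (n, d) array of Markov chain states
    score   : callable mapping an (n, d) array to grad log p, also (n, d)
    sigma   : bandwidth of the Gaussian base kernel (illustrative choice)
    """
    n, d = samples.shape
    s = score(samples)                                  # score at each state
    diff = samples[:, None, :] - samples[None, :, :]    # pairwise x - y
    sqdist = np.sum(diff**2, axis=-1)
    k = np.exp(-sqdist / (2 * sigma**2))                # base kernel k(x, y)
    # Stein kernel: div_x div_y k + (s(x) - s(y)) . (x - y) k / sigma^2
    #               + k s(x) . s(y), for the Gaussian base kernel above.
    term1 = d / sigma**2 - sqdist / sigma**4
    term2 = np.einsum(
        'ijk,ijk->ij', s[:, None, :] - s[None, :, :], diff
    ) / sigma**2
    term3 = s @ s.T
    return k * (term1 + term2 + term3)


def stein_weights(K):
    """Weights on the simplex minimising the squared KSD, w' K w."""
    n = K.shape[0]
    w0 = np.full(n, 1.0 / n)                            # start from equal weights
    res = minimize(
        lambda w: w @ K @ w,
        w0,
        jac=lambda w: 2 * K @ w,
        bounds=[(0.0, 1.0)] * n,
        constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1.0}],
        method='SLSQP',
    )
    return res.x


if __name__ == "__main__":
    # Toy demonstration: target P = N(0, I), so score(x) = -x.  The i.i.d.
    # draws here stand in for the output of a Pi-invariant Markov chain.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 2))
    K = stein_kernel_matrix(x, score=lambda x: -x)
    w = stein_weights(K)
    print("KSD of weighted approximation:", np.sqrt(w @ K @ w))
```

The weighted empirical measure $\sum_i w_i \delta_{x_i}$ then serves as the corrected approximation of $P$; because the optimisation only requires the score of $P$, the chain itself is free to target a different distribution $\Pi$, which is the degree of freedom the paper exploits.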
Cite
Text
Wang et al. "Stein $\Pi$-Importance Sampling." Neural Information Processing Systems, 2023.

Markdown
[Wang et al. "Stein $\Pi$-Importance Sampling." Neural Information Processing Systems, 2023.](https://mlanthology.org/neurips/2023/wang2023neurips-stein/)

BibTeX
@inproceedings{wang2023neurips-stein,
  title = {{Stein $\Pi$-Importance Sampling}},
  author = {Wang, Congye and Chen, Ye and Kanagawa, Heishiro and Oates, Chris J},
  booktitle = {Neural Information Processing Systems},
  year = {2023},
  url = {https://mlanthology.org/neurips/2023/wang2023neurips-stein/}
}