Fisher Information Lower Bounds for Sampling

Abstract

We prove two lower bounds for the complexity of non-log-concave sampling within the framework of Balasubramanian et al. (2022), who introduced the use of Fisher information ($\mathsf{FI}$) bounds as a notion of approximate first-order stationarity in sampling. Our first lower bound shows that averaged Langevin Monte Carlo (LMC) is optimal in the regime of large $\mathsf{FI}$ by reducing the problem of finding stationary points in non-convex optimization to sampling. Our second lower bound shows that in the regime of small $\mathsf{FI}$, obtaining a $\mathsf{FI}$ of at most $\varepsilon^2$ from the target distribution requires $\text{poly}(1/\varepsilon)$ queries, which is surprising as it rules out the existence of high-accuracy algorithms (e.g., algorithms using Metropolis–Hastings filters) in this context.
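
For context, the relative Fisher information of a distribution $\mu$ with respect to the target $\pi$ is $\mathsf{FI}(\mu \| \pi) = \mathbb{E}_\mu[\|\nabla \log(\mu/\pi)\|^2]$, and LMC is the Euler–Maruyama discretization of the Langevin diffusion. Below is a minimal Python sketch of averaged LMC, assuming query access to the gradient of the potential $V$ of the target $\pi \propto e^{-V}$; the function names, the step size $h$, and the convention of returning a uniformly random iterate (one way to realize the averaged output law) are illustrative assumptions rather than details taken from the paper.

import numpy as np

def averaged_lmc(grad_V, x0, h, n_steps, rng=None):
    # Sketch of averaged Langevin Monte Carlo (LMC): run the iteration
    #   x_{k+1} = x_k - h * grad_V(x_k) + sqrt(2 h) * xi_k,  xi_k ~ N(0, I),
    # then output an iterate chosen uniformly at random, so the output law
    # is the average of the iterates' marginal laws (assumed convention).
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    iterates = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - h * grad_V(x) + np.sqrt(2.0 * h) * noise
        iterates.append(x)
    return iterates[rng.integers(n_steps)]

# Example: a non-log-concave double-well target with V(x) = (x^2 - 1)^2 / 4,
# so grad_V(x) = (x^2 - 1) * x.
sample = averaged_lmc(lambda x: (x**2 - 1) * x, x0=np.zeros(1), h=0.01, n_steps=10_000)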

Cite

Text

Chewi et al. "Fisher Information Lower Bounds for Sampling." Proceedings of The 34th International Conference on Algorithmic Learning Theory, 2023.

Markdown

[Chewi et al. "Fisher Information Lower Bounds for Sampling." Proceedings of The 34th International Conference on Algorithmic Learning Theory, 2023.](https://mlanthology.org/alt/2023/chewi2023alt-fisher/)

BibTeX

@inproceedings{chewi2023alt-fisher,
  title     = {{Fisher Information Lower Bounds for Sampling}},
  author    = {Chewi, Sinho and Gerber, Patrik and Lee, Holden and Lu, Chen},
  booktitle = {Proceedings of The 34th International Conference on Algorithmic Learning Theory},
  year      = {2023},
  pages     = {375--410},
  volume    = {201},
  url       = {https://mlanthology.org/alt/2023/chewi2023alt-fisher/}
}