Tuning-Free Coreset Markov Chain Monte Carlo via Hot DoG
Abstract
A Bayesian coreset is a small, weighted subset of a data set that replaces the full data during inference to reduce computational cost. The state-of-the-art coreset construction algorithm, Coreset Markov chain Monte Carlo (Coreset MCMC), uses draws from an adaptive Markov chain targeting the coreset posterior to train the coreset weights via stochastic gradient optimization. However, the quality of the constructed coreset, and thus the quality of its posterior approximation, is sensitive to the stochastic optimization learning rate. In this work, we propose a learning-rate-free stochastic gradient optimization procedure, Hot-start Distance over Gradient (Hot DoG), for training coreset weights in Coreset MCMC without user tuning effort. We provide a theoretical analysis of the convergence of the coreset weights produced by Hot DoG. We also provide empirical results demonstrating that Hot DoG provides higher quality posterior approximations than other learning-rate-free stochastic gradient methods, and performs competitively with optimally-tuned ADAM.
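To make the learning-rate-free idea concrete, below is a minimal sketch of a distance-over-gradient (DoG) style stochastic gradient loop with a warm-up phase, assuming a generic stochastic gradient oracle `grad_fn` (e.g., gradient estimates built from MCMC draws targeting the coreset posterior). The function name, the warm-up handling, and all parameters are illustrative assumptions, not the paper's exact Hot DoG algorithm.

```python
import numpy as np

def dog_style_sgd(grad_fn, w0, n_iters=1000, n_warmup=100, r_eps=1e-4):
    """Sketch of a DoG-style (distance-over-gradient) update loop.

    grad_fn(w) returns a stochastic gradient estimate at weights w.
    The warm-up simply delays updates so that early gradient noise
    (e.g., from a not-yet-adapted Markov chain) does not distort the
    step size; this is an illustration, not the paper's procedure.
    """
    w = w0.copy()
    r_bar = r_eps       # max distance travelled from w0 so far
    g_sq_sum = 0.0      # running sum of squared gradient norms
    for t in range(n_iters):
        g = grad_fn(w)
        if t < n_warmup:
            continue    # warm-up: let the gradient source stabilize
        g_sq_sum += float(np.dot(g, g))
        # learning-rate-free step size: distance over gradient
        eta = r_bar / (np.sqrt(g_sq_sum) + 1e-12)
        w = w - eta * g
        r_bar = max(r_bar, float(np.linalg.norm(w - w0)))
    return w
```

The step size needs no user tuning: it grows as the iterates travel farther from the initialization and shrinks as squared gradient mass accumulates, so the only inputs are a small initial movement scale and the gradient oracle itself.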
Cite
Text
Chen et al. "Tuning-Free Coreset Markov Chain Monte Carlo via Hot DoG." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.

Markdown

[Chen et al. "Tuning-Free Coreset Markov Chain Monte Carlo via Hot DoG." Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, 2025.](https://mlanthology.org/uai/2025/chen2025uai-tuningfree/)

BibTeX
@inproceedings{chen2025uai-tuningfree,
title = {{Tuning-Free Coreset Markov Chain Monte Carlo via Hot DoG}},
author = {Chen, Naitong and Huggins, Jonathan H. and Campbell, Trevor},
booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
year = {2025},
pages = {647--672},
volume = {286},
url = {https://mlanthology.org/uai/2025/chen2025uai-tuningfree/}
}