Frequentist Guarantees of Distributed (Non)-Bayesian Inference
Abstract
We establish frequentist properties, namely posterior consistency, asymptotic normality, and posterior contraction rates, for the distributed (non)-Bayesian inference problem for a set of agents connected over a network. These results are motivated by the need to analyze large, decentralized datasets, where distributed (non)-Bayesian inference has become a critical research area across multiple fields, including statistics, machine learning, and economics. Our results show that, under appropriate assumptions on the communication graph, distributed (non)-Bayesian inference retains parametric efficiency while enhancing robustness in uncertainty quantification. We also explore the trade-off between statistical efficiency and communication efficiency by examining how the design and size of the communication graph affect the posterior contraction rate. Furthermore, we extend our analysis to time-varying graphs and apply our results to exponential family models, distributed logistic regression, and decentralized detection models.
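To make the setting concrete, below is a minimal sketch of the kind of distributed (non)-Bayesian learning rule studied in this literature: each agent geometrically averages its neighbors' beliefs (a consensus step in log space) and then performs a local Bayesian update with its newest private observation. This is an illustrative example, not the paper's code; the hypothesis set, Gaussian likelihood, and ring-graph weights are assumptions chosen for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two candidate means for a unit-variance Gaussian; theta[1] is the truth.
theta = np.array([0.0, 1.0])
n_agents = 4

# Doubly stochastic mixing weights for a 4-agent ring (hypothetical choice).
A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

# Each row is one agent's log-belief over the hypothesis set; uniform priors.
log_beliefs = np.log(np.full((n_agents, len(theta)), 0.5))

def log_lik(x, mu):
    """Gaussian log-likelihood with unit variance (up to an additive constant)."""
    return -0.5 * (x - mu) ** 2

for _ in range(200):
    x = rng.normal(loc=theta[1], scale=1.0, size=n_agents)  # private data
    # Consensus on log-beliefs, then a local Bayesian update per agent.
    log_beliefs = A @ log_beliefs + log_lik(x[:, None], theta[None, :])
    # Renormalize each agent's belief to sum to one.
    log_beliefs -= np.log(np.exp(log_beliefs).sum(axis=1, keepdims=True))

beliefs = np.exp(log_beliefs)
# All agents' beliefs concentrate on the true hypothesis theta[1].
```

The paper's frequentist guarantees concern exactly this kind of aggregate posterior: how fast each agent's belief contracts around the truth as a function of the spectral properties and size of the mixing matrix.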
Cite
Wu, Bohan and César A. Uribe. "Frequentist Guarantees of Distributed (Non)-Bayesian Inference." Journal of Machine Learning Research, 26:1-65, 2025.
BibTeX
@article{wu2025jmlr-frequentist,
title = {{Frequentist Guarantees of Distributed (Non)-Bayesian Inference}},
author = {Wu, Bohan and Uribe, César A.},
journal = {Journal of Machine Learning Research},
year = {2025},
pages = {1-65},
volume = {26},
url = {https://mlanthology.org/jmlr/2025/wu2025jmlr-frequentist/}
}