Generalized Median of Means Principle for Bayesian Inference

Abstract

The topic of robustness is experiencing a resurgence of interest in the statistical and machine learning communities. In particular, robust algorithms based on the so-called median of means estimator have been shown to satisfy strong performance guarantees for many problems, including estimation of the mean and the covariance structure, as well as linear regression. In this work, we propose an extension of the median of means principle to the Bayesian framework, leading to the notion of the robust posterior distribution. Specifically, we (a) quantify the robustness of this posterior to outliers, (b) show that it satisfies a version of the Bernstein-von Mises theorem connecting Bayesian credible sets to traditional confidence intervals, and (c) demonstrate that our approach performs well in applications.
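
As background for the abstract, the sketch below illustrates the classical (non-Bayesian) median-of-means estimator of a mean that the paper generalizes: the sample is split into disjoint blocks and the median of the per-block means is returned. The block count k and the toy heavy-tailed data are illustrative choices of ours, not taken from the paper, and the paper's robust posterior construction itself is not reproduced here.

import numpy as np

def median_of_means(x, k):
    """Classical median-of-means estimate of the mean of a 1-D sample.

    The sample is split into k (roughly equal) disjoint blocks; the
    estimate is the median of the per-block sample means.
    """
    x = np.asarray(x, dtype=float)
    blocks = np.array_split(x, k)                      # k disjoint blocks
    block_means = np.array([b.mean() for b in blocks]) # mean within each block
    return np.median(block_means)                      # median across blocks

# Toy example: heavy-tailed data contaminated with a few gross outliers.
rng = np.random.default_rng(0)
sample = np.concatenate([rng.standard_t(df=2, size=1000), [1e6, -1e6, 1e6]])
print("empirical mean: ", sample.mean())
print("median of means:", median_of_means(sample, k=11))

On such data the plain empirical mean is driven by the outliers, while the median-of-means estimate remains close to the true mean (zero); this robustness is the property the paper carries over to posterior distributions.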

Cite

Text

Minsker and Yao. "Generalized Median of Means Principle for Bayesian Inference." Machine Learning, 2025. doi:10.1007/s10994-025-06754-9

Markdown

[Minsker and Yao. "Generalized Median of Means Principle for Bayesian Inference." Machine Learning, 2025.](https://mlanthology.org/mlj/2025/minsker2025mlj-generalized/) doi:10.1007/s10994-025-06754-9

BibTeX

@article{minsker2025mlj-generalized,
  title     = {{Generalized Median of Means Principle for Bayesian Inference}},
  author    = {Minsker, Stanislav and Yao, Shunan},
  journal   = {Machine Learning},
  year      = {2025},
  pages     = {115},
  doi       = {10.1007/s10994-025-06754-9},
  volume    = {114},
  url       = {https://mlanthology.org/mlj/2025/minsker2025mlj-generalized/}
}