Bayesian Rashomon Sets for Model Uncertainty: A Critical Comparison

Abstract

In statistical analyses, both observational and experimental, understanding how outcomes vary with covariates is crucial. Traditional methods like Bayesian and frequentist regression, regression trees, and model averaging partition data into homogeneous pools to summarize outcomes. However, these methods either focus on a single optimal partition or sample from all possible partitions, often missing high-quality ones or including low-support partitions. A recently developed Bayesian approach, Rashomon Partition Sets (RPSs), enumerates partitions with posterior densities close to the maximum a posteriori (MAP) partition, capturing uncertainty among high-evidence partitions. RPSs adhere to two principles: scientific coherence and simplicity, using a minimax optimal $\ell_0$ prior without additional dependence assumptions. In this paper, we critically compare the RPS approach with three commonly used alternatives: Bayesian Model Averaging, Bayesian/frequentist regularization, and Causal Random Forests.
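The core enumeration idea in the abstract can be illustrated with a toy sketch. This is not the authors' algorithm (which enumerates the Rashomon Partition Set efficiently rather than by brute force); the Gaussian likelihood, the penalty weight standing in for the $\ell_0$ prior, and the toy data are all assumptions for illustration. We score every partition of covariate profiles into pools by an unnormalized log posterior and keep those within $\epsilon$ of the MAP partition:

```python
# Toy data: outcome observations keyed by covariate profile.
data = {"a": [1.0, 1.2, 0.9], "b": [1.1, 1.0], "c": [3.0, 3.2, 2.8]}

def log_posterior(partition, lam=2.0):
    """Unnormalized log posterior of a partition of profiles into pools:
    Gaussian log-likelihood around each pool's mean, minus an l0-style
    penalty (lam per pool) standing in for the sparsity prior (assumed)."""
    ll = 0.0
    for pool in partition:
        ys = [y for g in pool for y in data[g]]
        mean = sum(ys) / len(ys)
        ll += sum(-0.5 * (y - mean) ** 2 for y in ys)
    return ll - lam * len(partition)

def all_partitions(items):
    """Enumerate every set partition of `items` (fine for tiny examples)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in all_partitions(rest):
        # Add `first` to each existing pool in turn...
        for i in range(len(smaller)):
            yield smaller[:i] + [smaller[i] + [first]] + smaller[i + 1:]
        # ...or put it in a pool of its own.
        yield [[first]] + smaller

def rashomon_set(epsilon):
    """All partitions whose log posterior is within epsilon of the MAP."""
    scored = [(p, log_posterior(p)) for p in all_partitions(list(data))]
    map_score = max(s for _, s in scored)
    return [(p, s) for p, s in scored if map_score - s <= epsilon]
```

With this toy data, pooling the similar profiles `a` and `b` while keeping `c` separate is the MAP partition; widening `epsilon` admits further partitions (such as the fully pooled one) into the set, which is exactly the uncertainty over high-evidence partitions that the RPS captures.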

Cite

Text

Venkateswaran et al. "Bayesian Rashomon Sets for Model Uncertainty: A Critical Comparison." NeurIPS 2024 Workshops: BDU, 2024.

Markdown

[Venkateswaran et al. "Bayesian Rashomon Sets for Model Uncertainty: A Critical Comparison." NeurIPS 2024 Workshops: BDU, 2024.](https://mlanthology.org/neuripsw/2024/venkateswaran2024neuripsw-bayesian/)

BibTeX

@inproceedings{venkateswaran2024neuripsw-bayesian,
  title     = {{Bayesian Rashomon Sets for Model Uncertainty: A Critical Comparison}},
  author    = {Venkateswaran, Aparajithan and Sankar, Anirudh and Chandrasekhar, Arun and McCormick, Tyler},
  booktitle = {NeurIPS 2024 Workshops: BDU},
  year      = {2024},
  url       = {https://mlanthology.org/neuripsw/2024/venkateswaran2024neuripsw-bayesian/}
}