Learning Approximately Objective Priors
Abstract
Informative Bayesian priors are often difficult to elicit, and when this is the case, modelers usually turn to noninformative or objective priors. However, objective priors such as the Jeffreys and reference priors are not tractable to derive for many models of interest. We address this issue by proposing techniques for learning reference prior approximations: we select a parametric family and optimize a black-box lower bound on the reference prior objective to find the member of the family that serves as a good approximation. We experimentally demonstrate the method's effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder's reference prior.
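The reference prior objective mentioned in the abstract is the mutual information between the parameter and the data, which the Jeffreys prior maximizes asymptotically. As a minimal illustration (not the paper's black-box stochastic lower bound), the sketch below computes this mutual information by brute-force midpoint quadrature for a Binomial likelihood with a Beta prior, and shows that the Jeffreys prior Beta(1/2, 1/2) carries more information about the parameter than other members of the Beta family. All function names are illustrative.

```python
import math

def beta_logpdf(theta, a, b):
    """Log density of Beta(a, b) at theta."""
    log_norm = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return (a - 1) * math.log(theta) + (b - 1) * math.log(1 - theta) - log_norm

def mutual_information(a, b, n=10, grid=2000):
    """I(theta; X) for theta ~ Beta(a, b), X | theta ~ Binomial(n, theta).

    Computed by midpoint quadrature over theta -- an exhaustive stand-in
    for the stochastic lower bound the paper optimizes in intractable models.
    """
    thetas = [(i + 0.5) / grid for i in range(grid)]
    # Normalized prior weights on the grid (absorbs quadrature error
    # from the integrable endpoint singularities when a, b < 1).
    w = [math.exp(beta_logpdf(t, a, b)) for t in thetas]
    z = sum(w)
    w = [wi / z for wi in w]
    # Likelihood table p(x | theta) for x = 0..n.
    lik = [[math.comb(n, x) * t**x * (1 - t)**(n - x) for x in range(n + 1)]
           for t in thetas]
    # Marginal p(x) = sum_theta p(theta) p(x | theta).
    marg = [sum(w[i] * lik[i][x] for i in range(grid)) for x in range(n + 1)]
    # I(theta; X) = E_{theta, x}[log p(x | theta) - log p(x)].
    mi = 0.0
    for i in range(grid):
        for x in range(n + 1):
            if lik[i][x] > 0.0:
                mi += w[i] * lik[i][x] * math.log(lik[i][x] / marg[x])
    return mi

# The Jeffreys prior Beta(1/2, 1/2) yields higher mutual information than
# the uniform Beta(1, 1), which in turn beats the concentrated Beta(2, 2).
print(mutual_information(0.5, 0.5))
print(mutual_information(1.0, 1.0))
print(mutual_information(2.0, 2.0))
```

For models where the marginal and posterior are intractable, this exhaustive computation is unavailable, which is what motivates the paper's approach of optimizing a black-box lower bound over a parametric family instead.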
Cite
Text
Nalisnick and Smyth. "Learning Approximately Objective Priors." Conference on Uncertainty in Artificial Intelligence, 2017.
Markdown
[Nalisnick and Smyth. "Learning Approximately Objective Priors." Conference on Uncertainty in Artificial Intelligence, 2017.](https://mlanthology.org/uai/2017/nalisnick2017uai-learning/)
BibTeX
@inproceedings{nalisnick2017uai-learning,
title = {{Learning Approximately Objective Priors}},
author = {Nalisnick, Eric T. and Smyth, Padhraic},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2017},
url = {https://mlanthology.org/uai/2017/nalisnick2017uai-learning/}
}