Approximate Inference for the Loss-Calibrated Bayesian
Abstract
We consider the problem of approximate inference in the context of Bayesian decision theory. Traditional approaches focus on approximating general properties of the posterior, ignoring the decision task – and associated losses – for which the posterior could be used. We argue that this can be suboptimal and propose instead to loss-calibrate the approximate inference methods with respect to the decision task at hand. We present a general framework rooted in Bayesian decision theory to analyze approximate inference from the perspective of losses, opening up several research directions. As a first loss-calibrated approximate inference attempt, we propose an EM-like algorithm on the Bayesian posterior risk and show how it can improve a standard approach to Gaussian process classification when losses are asymmetric.
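To make the abstract's point concrete, here is a minimal sketch (not the paper's algorithm) of why the loss matters: under Bayesian decision theory the optimal action minimizes posterior expected loss, so with an asymmetric loss the decision threshold moves away from 0.5. The loss values and posterior probability below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: Bayes-optimal binary decisions under symmetric vs. asymmetric
# losses. All numbers here are illustrative assumptions.

def bayes_decision(p_pos, loss):
    """Return the action minimizing posterior expected loss.

    p_pos : posterior probability that the true label is 1
    loss  : dict mapping (action, true_label) -> loss value
    """
    posterior = {0: 1.0 - p_pos, 1: p_pos}
    risks = {
        a: sum(loss[(a, y)] * posterior[y] for y in (0, 1))
        for a in (0, 1)
    }
    return min(risks, key=risks.get)

# Symmetric 0-1 loss: the optimal rule thresholds the posterior at 0.5.
symmetric = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}
# Asymmetric loss: a false negative costs 10x a false positive.
asymmetric = {(0, 0): 0.0, (0, 1): 10.0, (1, 0): 1.0, (1, 1): 0.0}

p = 0.2  # assumed posterior probability of the positive class
print(bayes_decision(p, symmetric))   # -> 0 (below the 0.5 threshold)
print(bayes_decision(p, asymmetric))  # -> 1 (asymmetry shifts the threshold)
```

Because the optimal decision depends on the loss through the posterior only in certain regions, an approximate posterior that is accurate where the loss is sensitive can yield better decisions than one that is accurate on average; this is the intuition behind loss-calibrating the inference itself.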
Cite
Text
Lacoste–Julien et al. "Approximate Inference for the Loss-Calibrated Bayesian." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.
Markdown
[Lacoste–Julien et al. "Approximate Inference for the Loss-Calibrated Bayesian." Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011.](https://mlanthology.org/aistats/2011/lacostejulien2011aistats-approximate/)
BibTeX
@inproceedings{lacostejulien2011aistats-approximate,
title = {{Approximate Inference for the Loss-Calibrated Bayesian}},
author = {Lacoste–Julien, Simon and Huszár, Ferenc and Ghahramani, Zoubin},
booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
year = {2011},
pages = {416-424},
volume = {15},
url = {https://mlanthology.org/aistats/2011/lacostejulien2011aistats-approximate/}
}