Adversarial Interpretation of Bayesian Inference
Abstract
We build on the optimization-centric view on Bayesian inference advocated by Knoblauch et al. (2019). Thinking about Bayesian and generalized Bayesian posteriors as the solutions to a regularized minimization problem allows us to answer an intriguing question: If minimization is the primal problem, then what is its dual? By deriving the Fenchel dual of the problem, we demonstrate that this dual corresponds to an adversarial game: In the dual space, the prior becomes the cost function for an adversary that seeks to perturb the likelihood [loss] function targeted by standard [generalized] Bayesian inference. This implies that Bayes-like procedures are adversarially robust—providing another firm theoretical foundation for their empirical performance. Our contributions are foundational, and apply to a wide-ranging set of Machine Learning methods. This includes standard Bayesian inference, generalized Bayesian and Gibbs posteriors (Bissiri et al., 2016), as well as a diverse set of other methods including Generalized Variational Inference (Knoblauch et al., 2019) and the Wasserstein Autoencoder (Tolstikhin et al., 2017).
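The optimization-centric view referenced above can be sketched as follows. This is a hedged reconstruction based on the standard formulation in Knoblauch et al. (2019) and Bissiri et al. (2016), not text from this paper: writing $\pi$ for the prior, $\ell$ for a loss (the negative log-likelihood in the standard Bayesian case), and $\mathcal{P}(\Theta)$ for the distributions on the parameter space, the (generalized) posterior solves a KL-regularized minimization problem:

```latex
% Generalized Bayesian posterior as a regularized minimization problem
% (standard formulation; symbols \pi, \ell, q are notational assumptions here)
q^{*} \;=\; \operatorname*{arg\,min}_{q \in \mathcal{P}(\Theta)}
  \left\{ \mathbb{E}_{\theta \sim q}\!\left[\ell(\theta, x)\right]
  \;+\; D_{\mathrm{KL}}(q \,\|\, \pi) \right\},
\qquad
q^{*}(\theta) \;\propto\; \pi(\theta)\, \exp\!\left(-\ell(\theta, x)\right).
```

When $\ell(\theta, x) = -\log p(x \mid \theta)$, the solution $q^{*}$ is exactly the standard Bayes posterior; this minimization is the primal problem whose Fenchel dual the paper shows to be an adversarial game.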
Cite
Husain and Knoblauch. "Adversarial Interpretation of Bayesian Inference." Proceedings of The 33rd International Conference on Algorithmic Learning Theory, 2022. (https://mlanthology.org/alt/2022/husain2022alt-adversarial/)

BibTeX
@inproceedings{husain2022alt-adversarial,
title = {{Adversarial Interpretation of Bayesian Inference}},
author = {Husain, Hisham and Knoblauch, Jeremias},
booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
year = {2022},
  pages = {553--572},
volume = {167},
url = {https://mlanthology.org/alt/2022/husain2022alt-adversarial/}
}