Learning to Pass Expectation Propagation Messages

Abstract

Expectation Propagation (EP) is a popular approximate posterior inference algorithm that often provides a fast and accurate alternative to sampling-based methods. However, while the EP framework in theory allows for complex non-Gaussian factors, there is still a significant practical barrier to using them within EP, because doing so requires the implementation of message update operators, which can be difficult and require hand-crafted approximations. In this work, we study the question of whether it is possible to automatically derive fast and accurate EP updates by learning a discriminative model (e.g., a neural network or random forest) to map EP message inputs to EP message outputs. We address the practical concerns that arise in the process, and we provide empirical analysis on several challenging and diverse factors, indicating that there is a space of factors where this approach appears promising.
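
As a rough illustration of the idea in the abstract, here is a minimal sketch (not the authors' code; the factor choice, Monte Carlo moment matching, and random forest regressor are all assumptions for the example). It builds a training set for a hypothetical deterministic factor y = x² by sampling incoming Gaussian message parameters, computing the moments of the tilted marginal on y by importance sampling, and fitting a regressor to map message inputs to those output moments.

```python
# Sketch: learn an EP message operator for the hypothetical factor y = x**2.
# Assumptions: incoming messages are Gaussians N(x; mx, vx), N(y; my, vy);
# the regressor predicts the moments of the projected (tilted) marginal on y.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def target_moments(mx, vx, my, vy, n=20000):
    """Monte Carlo moment matching for the tilted marginal on y.

    Samples x from its incoming message, pushes it through y = x**2,
    and reweights by the incoming message on y (log-space for stability).
    """
    x = rng.normal(mx, np.sqrt(vx), n)
    y = x ** 2
    logw = -0.5 * (y - my) ** 2 / vy       # incoming message on y, up to a constant
    w = np.exp(logw - logw.max())          # subtract max to avoid underflow
    w /= w.sum()
    mean = np.sum(w * y)
    var = np.sum(w * (y - mean) ** 2)
    return mean, var

# Training set: random incoming-message parameters -> target output moments.
X, Y = [], []
for _ in range(2000):
    mx, my = rng.normal(0.0, 2.0, 2)
    vx, vy = rng.uniform(0.1, 4.0, 2)
    X.append([mx, vx, my, vy])
    Y.append(target_moments(mx, vx, my, vy))

# The learned operator stands in for the hand-derived analytic EP update.
op = RandomForestRegressor(n_estimators=100).fit(np.array(X), np.array(Y))
print(op.predict([[0.0, 1.0, 1.0, 0.5]]))  # predicted (mean, var) of the tilted marginal on y
```

At inference time such a learned operator would replace the quadrature or sampling step inside the EP loop: the outgoing message is recovered by dividing the predicted Gaussian projection of the tilted marginal by the incoming message on y, as in standard EP.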

Cite

Text

Heess et al. "Learning to Pass Expectation Propagation Messages." Neural Information Processing Systems, 2013.

Markdown

[Heess et al. "Learning to Pass Expectation Propagation Messages." Neural Information Processing Systems, 2013.](https://mlanthology.org/neurips/2013/heess2013neurips-learning/)

BibTeX

@inproceedings{heess2013neurips-learning,
  title     = {{Learning to Pass Expectation Propagation Messages}},
  author    = {Heess, Nicolas and Tarlow, Daniel and Winn, John},
  booktitle = {Neural Information Processing Systems},
  year      = {2013},
  pages     = {3219--3227},
  url       = {https://mlanthology.org/neurips/2013/heess2013neurips-learning/}
}