Residual Splash for Optimally Parallelizing Belief Propagation
Abstract
As computer architectures move towards parallelism, we must build a new theoretical understanding of parallelism in machine learning. In this paper, we focus on parallelizing message passing inference algorithms in graphical models. We develop a theoretical understanding of the limitations of parallelism in belief propagation and bound the optimal achievable parallel performance on a certain class of graphical models. We demonstrate that the fully synchronous parallelization of belief propagation is highly inefficient, and we provide a new parallel belief propagation algorithm which achieves optimal performance on this class of graphical models. We empirically evaluate our algorithm on two challenging real-world problems, finding that it achieves near-linear performance improvements and outperforms alternative parallel belief propagation algorithms.
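The abstract describes the scheduling idea only at a high level. As a rough illustration of the residual-prioritization principle the paper builds on (updating the messages that changed most recently first, in the spirit of residual belief propagation), here is a minimal single-threaded Python sketch on a chain MRF with tabular potentials. The function residual_bp, the chain topology, and the potentials are hypothetical choices for illustration; this is not the authors' Splash algorithm, which additionally grows spanning-tree "splashes" around high-residual vertices and processes them in parallel.

import heapq

import numpy as np

def residual_bp(unary, pairwise, max_updates=10000, tol=1e-6):
    """Residual-scheduled BP on a chain MRF (illustrative sketch only)."""
    n, k = unary.shape                      # n variables, k states each
    # Messages for both directions of every chain edge, initially uniform.
    msgs = {}
    for i in range(n - 1):
        msgs[(i, i + 1)] = np.full(k, 1.0 / k)
        msgs[(i + 1, i)] = np.full(k, 1.0 / k)

    def neighbors(v):
        return [w for w in (v - 1, v + 1) if 0 <= w < n]

    def compute_msg(u, v):
        # Product of u's unary and incoming messages except from v,
        # marginalized through the pairwise potential toward v.
        prod = unary[u].astype(float)
        for w in neighbors(u):
            if w != v:
                prod = prod * msgs[(w, u)]
        pot = pairwise if u < v else pairwise.T
        new = prod @ pot
        return new / new.sum()

    # Priority queue keyed by negated residual; start every edge "hot"
    # so each message is computed at least once.
    heap = [(-1.0, e) for e in msgs]
    heapq.heapify(heap)
    updates = 0
    while heap and updates < max_updates:
        neg_res, (u, v) = heapq.heappop(heap)
        if -neg_res < tol:
            break                           # largest residual has converged
        new = compute_msg(u, v)
        res = float(np.abs(new - msgs[(u, v)]).max())
        msgs[(u, v)] = new
        updates += 1
        if res > tol:
            # A changed message perturbs v's other outgoing messages,
            # so re-enqueue them with the triggering residual as priority.
            for w in neighbors(v):
                if w != u:
                    heapq.heappush(heap, (-res, (v, w)))

    beliefs = unary.astype(float)
    for v in range(n):
        for w in neighbors(v):
            beliefs[v] = beliefs[v] * msgs[(w, v)]
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# Example usage with random unary potentials and an attractive coupling:
rng = np.random.default_rng(0)
unary = rng.random((8, 3))                  # 8 variables, 3 states
pairwise = np.exp(2.0 * np.eye(3))
print(residual_bp(unary, pairwise))

The design point this sketch captures is that a residual-ordered queue concentrates computation where beliefs are still changing, rather than sweeping all messages synchronously; the paper's contribution, as the abstract states, is showing that the synchronous sweep is provably inefficient and that a residual-driven parallel schedule can achieve the optimal bound.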
Cite
Text
Gonzalez et al. "Residual Splash for Optimally Parallelizing Belief Propagation." Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, 2009.
Markdown
[Gonzalez et al. "Residual Splash for Optimally Parallelizing Belief Propagation." Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, 2009.](https://mlanthology.org/aistats/2009/gonzalez2009aistats-residual/)
BibTeX
@inproceedings{gonzalez2009aistats-residual,
  title = {{Residual Splash for Optimally Parallelizing Belief Propagation}},
  author = {Gonzalez, Joseph and Low, Yucheng and Guestrin, Carlos},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  year = {2009},
  pages = {177--184},
  volume = {5},
  url = {https://mlanthology.org/aistats/2009/gonzalez2009aistats-residual/}
}