Stochastic Integration via Error-Correcting Codes
Abstract
We consider the task of summing a non-negative function f over a discrete set Ω, e.g., to compute the partition function of a graphical model. Ermon et al. have shown that, in a probabilistic approximate sense, summation can be reduced to maximizing f over random subsets of Ω defined by parity (XOR) constraints. Unfortunately, XORs with many variables are computationally problematic, while XORs with few variables have no guarantees. We introduce two ideas to address this problem, both motivated by the theory of error-correcting codes. The first is to maximize f over explicitly generated random affine subspaces of Ω, which is equivalent to unconstrained maximization of f over an exponentially smaller domain. The second idea, closer in spirit to the original approach, is to use systems of linear equations defining error-correcting codes. Even though the equations in such systems only contain O(1) variables each, their sets of solutions (codewords) have excellent statistical properties. By combining these ideas we achieve a 100x or greater speedup over the original approach and, perhaps more importantly, levels of accuracy that were previously unattainable.
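The reduction described above can be illustrated with a small brute-force sketch. This is our own toy rendering of the parity-constraint idea attributed to Ermon et al., not the paper's code: the function name `wish_estimate`, its parameters, and the exhaustive search over {0,1}^n are all assumptions made for illustration (the paper's contribution is precisely avoiding the cost that long XORs impose on real MAP solvers).

```python
import itertools
import random

def wish_estimate(f, n, trials=5, seed=0):
    """Estimate sum_{x in {0,1}^n} f(x) by maximizing f over random
    affine subspaces defined by parity (XOR) constraints.
    Hypothetical illustration only: brute-force search, toy scale."""
    rng = random.Random(seed)

    def constrained_max(i):
        # Sample i random XOR constraints A x = b (mod 2) and maximize f
        # over the resulting random affine subspace by exhaustive search.
        A = [[rng.randint(0, 1) for _ in range(n)] for _ in range(i)]
        b = [rng.randint(0, 1) for _ in range(i)]
        best = 0.0
        for x in itertools.product([0, 1], repeat=n):
            if all(sum(a * xi for a, xi in zip(row, x)) % 2 == bi
                   for row, bi in zip(A, b)):
                best = max(best, f(x))
        return best

    def median_max(i):
        # Median over independent trials, for robustness of the estimate.
        vals = sorted(constrained_max(i) for _ in range(trials))
        return vals[len(vals) // 2]

    # Estimator of the WISH form: M_0 + sum_i M_i * 2^(i-1), where M_i is
    # the median constrained maximum under i random parity constraints.
    return median_max(0) + sum(median_max(i) * 2 ** (i - 1)
                               for i in range(1, n + 1))
```

For the constant function f ≡ 1 on {0,1}^3 the true sum is 8, and since every median term lies in {0, 1}, the estimate is guaranteed to fall in [1, 8]; each added constraint halves the surviving subset in expectation, which is what makes the 2^(i-1) weighting sensible.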
Cite
Text
Achlioptas and Jiang. "Stochastic Integration via Error-Correcting Codes." Conference on Uncertainty in Artificial Intelligence, 2015.
Markdown
[Achlioptas and Jiang. "Stochastic Integration via Error-Correcting Codes." Conference on Uncertainty in Artificial Intelligence, 2015.](https://mlanthology.org/uai/2015/achlioptas2015uai-stochastic/)
BibTeX
@inproceedings{achlioptas2015uai-stochastic,
title = {{Stochastic Integration via Error-Correcting Codes}},
author = {Achlioptas, Dimitris and Jiang, Pei},
booktitle = {Conference on Uncertainty in Artificial Intelligence},
year = {2015},
pages = {22-31},
url = {https://mlanthology.org/uai/2015/achlioptas2015uai-stochastic/}
}