Outlier-Robust Learning of Ising Models Under Dobrushin’s Condition
Abstract
We study the problem of learning Ising models satisfying Dobrushin’s condition in the outlier-robust setting, where a constant fraction of the samples are adversarially corrupted. Our main result is the first computationally efficient robust learning algorithm for this problem, achieving near-optimal error guarantees. Our algorithm can be seen as a special case of an algorithm for robustly learning a distribution from a general exponential family. To prove its correctness for Ising models, we establish new anti-concentration results for degree-2 polynomials of Ising models that may be of independent interest.
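To make the setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of the generative and corruption model: an Ising model over {-1,+1}^n whose interaction matrix satisfies Dobrushin's condition (maximum row ℓ1-norm bounded below 1), approximately sampled via Glauber dynamics, with an ε-fraction of samples then replaced by arbitrary points. All parameter choices (n, η, the step count, the adversary's replacement) are hypothetical.

```python
import numpy as np

def random_dobrushin_J(n, eta=0.5, rng=None):
    """Random symmetric interaction matrix with zero diagonal, rescaled so
    its max row l1-norm is at most 1 - eta (Dobrushin's condition)."""
    rng = np.random.default_rng(rng)
    J = rng.normal(size=(n, n))
    J = (J + J.T) / 2.0
    np.fill_diagonal(J, 0.0)
    J *= (1.0 - eta) / np.abs(J).sum(axis=1).max()  # enforce the condition
    return J

def glauber_sample(J, h, steps=2000, rng=None):
    """One approximate sample x in {-1,+1}^n after `steps` Glauber updates."""
    rng = np.random.default_rng(rng)
    n = J.shape[0]
    x = rng.choice([-1.0, 1.0], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        # Conditional law of x_i given the other coordinates.
        m = J[i] @ x + h[i]  # J has zero diagonal, so x_i does not contribute
        p_plus = 1.0 / (1.0 + np.exp(-2.0 * m))
        x[i] = 1.0 if rng.random() < p_plus else -1.0
    return x

def corrupt(samples, eps, rng=None):
    """Corruption-model sketch: replace an eps-fraction of the samples with
    arbitrary points (all-ones vectors stand in for a worst-case adversary)."""
    rng = np.random.default_rng(rng)
    out = samples.copy()
    k = int(eps * len(out))
    idx = rng.choice(len(out), size=k, replace=False)
    out[idx] = 1.0
    return out

n, eps = 10, 0.1
J = random_dobrushin_J(n, rng=0)
h = np.zeros(n)
clean = np.stack([glauber_sample(J, h, rng=s) for s in range(50)])
dirty = corrupt(clean, eps, rng=1)
print(np.abs(J).sum(axis=1).max() <= 1.0)  # prints True: Dobrushin holds
```

The learner observes only `dirty`; the paper's contribution is an efficient algorithm that recovers J from such corrupted samples with near-optimal error.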
Cite
Text

Diakonikolas et al. "Outlier-Robust Learning of Ising Models Under Dobrushin’s Condition." Conference on Learning Theory, 2021.

Markdown

[Diakonikolas et al. "Outlier-Robust Learning of Ising Models Under Dobrushin’s Condition." Conference on Learning Theory, 2021.](https://mlanthology.org/colt/2021/diakonikolas2021colt-outlierrobust/)

BibTeX
@inproceedings{diakonikolas2021colt-outlierrobust,
title = {{Outlier-Robust Learning of Ising Models Under Dobrushin’s Condition}},
author = {Diakonikolas, Ilias and Kane, Daniel M. and Stewart, Alistair and Sun, Yuxin},
booktitle = {Conference on Learning Theory},
year = {2021},
pages = {1645--1682},
volume = {134},
url = {https://mlanthology.org/colt/2021/diakonikolas2021colt-outlierrobust/}
}