Learning Ising Models with Independent Failures
Abstract
We give the first efficient algorithm for learning the structure of an Ising model that tolerates independent failures; that is, each entry of the observed sample is missing with some unknown probability $p$. Our algorithm matches the essentially optimal runtime and sample complexity bounds of recent work for learning Ising models due to Klivans and Meka (2017). We devise a novel unbiased estimator for the gradient of the Interaction Screening Objective (ISO) due to Vuffray et al. (2016) and apply a stochastic multiplicative gradient descent algorithm to minimize this objective. Solutions to this minimization recover the neighborhood information of the underlying Ising model on a node-by-node basis.
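To make the approach concrete, here is a minimal sketch of the (fully observed) Interaction Screening Objective for a single node and plain gradient descent on it. This illustrates only the baseline ISO of Vuffray et al. (2016); the paper's actual contribution, an unbiased gradient estimator under independently missing entries combined with multiplicative gradient descent, is not reproduced here, and the function names and toy data below are our own.

```python
import numpy as np

def iso(theta, X, u):
    """Empirical ISO for node u: mean over samples of exp(-x_u * <theta, x>).

    theta[u] is held at zero, so the inner product only ranges over v != u.
    """
    inner = X[:, u] * (X @ theta)
    return np.mean(np.exp(-inner))

def iso_grad(theta, X, u):
    """Gradient of the empirical ISO with respect to theta."""
    inner = X[:, u] * (X @ theta)
    w = np.exp(-inner)                    # per-sample weights
    grad = (-(w * X[:, u])[:, None] * X).mean(axis=0)
    grad[u] = 0.0                         # theta_uu is fixed at zero
    return grad

# Toy data (not a true Ising sample): coordinate 1 agrees with
# coordinate 0 with probability 0.8; the rest are independent noise.
rng = np.random.default_rng(0)
n, d, u = 2000, 5, 0
X = rng.choice([-1.0, 1.0], size=(n, d))
flip = rng.random(n) < 0.8
X[:, 1] = np.where(flip, X[:, 0], -X[:, 0])

theta = np.zeros(d)
for _ in range(200):
    theta -= 0.1 * iso_grad(theta, X, u)  # plain GD, for simplicity
# theta[1] should be clearly positive (an edge 0-1); other entries near 0.
```

Minimizing the ISO is a convex problem, so simple first-order methods suffice here; the population minimizer for the correlated coordinate is $\tfrac{1}{2}\ln(0.8/0.2) \approx 0.69$ in this toy setup.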
Cite

Text

Goel et al. "Learning Ising Models with Independent Failures." Conference on Learning Theory, 2019.

Markdown

[Goel et al. "Learning Ising Models with Independent Failures." Conference on Learning Theory, 2019.](https://mlanthology.org/colt/2019/goel2019colt-learning/)

BibTeX
@inproceedings{goel2019colt-learning,
title = {{Learning Ising Models with Independent Failures}},
author = {Goel, Surbhi and Kane, Daniel M. and Klivans, Adam R.},
booktitle = {Conference on Learning Theory},
year = {2019},
pages = {1449--1469},
volume = {99},
url = {https://mlanthology.org/colt/2019/goel2019colt-learning/}
}