Convergence Rates of Biased Stochastic Optimization for Learning Sparse Ising Models
Abstract
We study the convergence rate of stochastic optimization of exact (NP-hard) objectives, for which only biased estimates of the gradient are available. We motivate this problem in the context of learning the structure and parameters of Ising models. We first provide a convergence-rate analysis of deterministic errors for forward-backward splitting (FBS). We then extend our analysis to biased stochastic errors, by first characterizing a family of samplers and providing a high probability bound that allows understanding not only FBS, but also proximal gradient (PG) methods. We derive some interesting conclusions: FBS requires only a logarithmically increasing number of random samples in order to converge (although at a very low rate); the required number of random samples is the same for the deterministic and the biased stochastic setting for FBS and basic PG; accelerated PG is not guaranteed to converge in the biased stochastic setting.
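As an illustration of the setting described in the abstract, below is a minimal sketch of forward-backward splitting (FBS) with a biased stochastic gradient oracle. The quadratic objective, the step size, and the biased_grad oracle are illustrative assumptions, not the paper's construction; the sketch only shows the forward (gradient) step followed by the backward (l1 proximal) step, with a per-iteration sample count that grows logarithmically, mirroring the abstract's conclusion for FBS.

import numpy as np

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1 (the "backward" step).
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def fbs(biased_grad, w0, lam, step, n_iters):
    # Forward-backward splitting: a gradient step on the smooth part,
    # followed by the l1 proximal step that induces sparsity.
    w = w0.copy()
    for k in range(n_iters):
        g = biased_grad(w, k)                  # biased stochastic gradient estimate
        w = soft_threshold(w - step * g, step * lam)
    return w

# Illustrative (assumed) oracle: the exact gradient of a strongly convex
# quadratic, corrupted by noise and a bias that both shrink as the number
# of random samples grows; the sample count increases only logarithmically
# with the iteration index k.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
A = A @ A.T / 20 + np.eye(20)
b = rng.standard_normal(20)

def biased_grad(w, k):
    n_samples = int(10 * np.log(k + 2))        # logarithmically increasing
    noise = rng.standard_normal(w.shape) / np.sqrt(n_samples)
    bias = 1.0 / n_samples                     # vanishing bias term
    return A @ w - b + noise + bias

w_hat = fbs(biased_grad, w0=np.zeros(20), lam=0.1, step=0.05, n_iters=500)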
Cite
Text
Honorio. "Convergence Rates of Biased Stochastic Optimization for Learning Sparse Ising Models." International Conference on Machine Learning, 2012.Markdown
[Honorio. "Convergence Rates of Biased Stochastic Optimization for Learning Sparse Ising Models." International Conference on Machine Learning, 2012.](https://mlanthology.org/icml/2012/honorio2012icml-convergence/)BibTeX
@inproceedings{honorio2012icml-convergence,
title = {{Convergence Rates of Biased Stochastic Optimization for Learning Sparse Ising Models}},
author = {Honorio, Jean},
booktitle = {International Conference on Machine Learning},
year = {2012},
url = {https://mlanthology.org/icml/2012/honorio2012icml-convergence/}
}