Stochastic Learning for Sparse Discrete Markov Random Fields with Controlled Gradient Approximation Error

Abstract

We study the L1-regularized maximum likelihood estimation (MLE) problem for discrete Markov random fields (MRFs), where efficient and scalable learning requires both sparse regularization and approximate inference. To address these challenges, we consider a stochastic learning framework called stochastic proximal gradient (SPG; Honorio 2012a, Atchade et al. 2014, Miasojedow and Rejchel 2016). SPG is an inexact proximal gradient algorithm [Schmidt et al., 2011] whose inexactness stems from the stochastic oracle (Gibbs sampling) used for gradient approximation; exact gradient evaluation is infeasible in general because inference in discrete MRFs is NP-hard [Koller and Friedman, 2009]. Theoretically, we provide novel verifiable bounds to inspect and control the quality of the gradient approximation. Empirically, we propose the tighten asymptotically (TAY) learning strategy, based on these verifiable bounds, to boost the performance of SPG.
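To make the SPG idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of one inexact proximal-gradient step for a small pairwise Ising model: the intractable model expectation in the gradient is replaced by a Gibbs-sampling estimate, and the L1 penalty enters through a soft-thresholding proximal operator. The function names, the step size, and the sampler settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_samples(theta, n_samples, burn_in=200):
    """Approximate samples from a {-1,+1} pairwise Ising model via Gibbs sweeps.
    theta: symmetric (d, d) interaction matrix with zero diagonal."""
    d = theta.shape[0]
    x = rng.choice([-1, 1], size=d)
    out = []
    for t in range(burn_in + n_samples):
        for i in range(d):
            # Conditional P(x_i = +1 | rest) for the Ising model.
            p = 1.0 / (1.0 + np.exp(-2.0 * theta[i] @ x))
            x[i] = 1 if rng.random() < p else -1
        if t >= burn_in:
            out.append(x.copy())
    return np.array(out)

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def spg_step(theta, data, lam, step, n_samples=100):
    """One inexact proximal-gradient step: the gradient of the negative
    log-likelihood is (model moments - empirical moments), with the model
    moments estimated stochastically by Gibbs sampling."""
    emp = data.T @ data / len(data)          # empirical pairwise moments
    samples = gibbs_samples(theta, n_samples)
    model = samples.T @ samples / len(samples)  # approximate model moments
    grad = model - emp
    theta_new = soft_threshold(theta - step * grad, step * lam)
    np.fill_diagonal(theta_new, 0.0)
    return (theta_new + theta_new.T) / 2.0   # keep the iterate symmetric
```

In this sketch the number of Gibbs samples controls the gradient approximation error; the paper's verifiable bounds (and the TAY strategy of tightening them over iterations) correspond to growing `n_samples` adaptively rather than fixing it.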

Cite

Text

Geng et al. "Stochastic Learning for Sparse Discrete Markov Random Fields with Controlled Gradient Approximation Error." Conference on Uncertainty in Artificial Intelligence, 2018.

Markdown

[Geng et al. "Stochastic Learning for Sparse Discrete Markov Random Fields with Controlled Gradient Approximation Error." Conference on Uncertainty in Artificial Intelligence, 2018.](https://mlanthology.org/uai/2018/geng2018uai-stochastic/)

BibTeX

@inproceedings{geng2018uai-stochastic,
  title     = {{Stochastic Learning for Sparse Discrete Markov Random Fields with Controlled Gradient Approximation Error}},
  author    = {Geng, Sinong and Kuang, Zhaobin and Liu, Jie and Wright, Stephen J. and Page, David},
  booktitle = {Conference on Uncertainty in Artificial Intelligence},
  year      = {2018},
  pages     = {156--166},
  url       = {https://mlanthology.org/uai/2018/geng2018uai-stochastic/}
}