Newton-Type Methods for Inference in Higher-Order Markov Random Fields
Abstract
Linear programming relaxations are central to MAP inference in discrete Markov Random Fields. The ability to properly solve the Lagrangian dual is a critical component of such methods. In this paper, we study the benefit of using Newton-type methods to solve the Lagrangian dual of a smooth version of the problem. We investigate their ability to achieve superior convergence behavior and to better handle the ill-conditioned nature of the formulation, as compared to first order methods. We show that it is indeed possible to efficiently apply a trust region Newton method for a broad range of MAP inference problems. In this paper we propose a provably globally efficient framework that includes (i) excellent compromise between computational complexity and precision concerning the Hessian matrix construction, (ii) a damping strategy that aids efficient optimization, (iii) a truncation strategy coupled with a generic pre-conditioner for Conjugate Gradients, (iv) efficient sum-product computation for sparse clique potentials. Results for higher-order Markov Random Fields demonstrate the potential of this approach.
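The trust region Newton method with truncated Conjugate Gradients mentioned in the abstract typically uses the Steihaug-Toint inner solver, which approximately minimizes the local quadratic model while respecting the trust-region radius. The sketch below is not the paper's implementation; it is a minimal, generic Steihaug truncated-CG step (the function names and the quadratic test problem are illustrative assumptions), showing how negative curvature and the trust-region boundary truncate the CG iteration.

```python
import numpy as np

def steihaug_cg(grad, hess_vec, delta, tol=1e-8, max_iter=50):
    """Approximately minimize the quadratic model
    m(p) = grad^T p + 0.5 p^T H p  subject to ||p|| <= delta,
    using only Hessian-vector products (hess_vec)."""
    p = np.zeros_like(grad)
    r = grad.copy()          # residual of the Newton system H p = -grad
    if np.linalg.norm(r) < tol:
        return p
    d = -r                   # initial CG search direction
    for _ in range(max_iter):
        Hd = hess_vec(d)
        dHd = d @ Hd
        if dHd <= 0.0:
            # negative curvature: follow d to the trust-region boundary
            return _to_boundary(p, d, delta)
        alpha = (r @ r) / dHd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:
            # step leaves the trust region: truncate at the boundary
            return _to_boundary(p, d, delta)
        r_next = r + alpha * Hd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def _to_boundary(p, d, delta):
    # solve ||p + tau * d|| = delta for the positive root tau
    a = d @ d
    b = 2.0 * (p @ d)
    c = p @ p - delta ** 2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p + tau * d
```

With a large radius this recovers the exact Newton step of a convex quadratic; with a small radius it returns a step of norm exactly `delta`. In the paper's setting the Hessian-vector products would come from the smoothed dual objective, which is where the complexity/precision trade-off in item (i) of the abstract enters.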
Cite
Text
Kannan et al. "Newton-Type Methods for Inference in Higher-Order Markov Random Fields." Conference on Computer Vision and Pattern Recognition, 2017. doi:10.1109/CVPR.2017.764
Markdown
[Kannan et al. "Newton-Type Methods for Inference in Higher-Order Markov Random Fields." Conference on Computer Vision and Pattern Recognition, 2017.](https://mlanthology.org/cvpr/2017/kannan2017cvpr-newtontype/) doi:10.1109/CVPR.2017.764
BibTeX
@inproceedings{kannan2017cvpr-newtontype,
title = {{Newton-Type Methods for Inference in Higher-Order Markov Random Fields}},
author = {Kannan, Hariprasad and Komodakis, Nikos and Paragios, Nikos},
booktitle = {Conference on Computer Vision and Pattern Recognition},
year = {2017},
doi = {10.1109/CVPR.2017.764},
url = {https://mlanthology.org/cvpr/2017/kannan2017cvpr-newtontype/}
}