Learning the Right Model: Efficient Max-Margin Learning in Laplacian CRFs
Abstract
An important modeling decision made while designing Conditional Random Fields (CRFs) is the choice of the potential functions over the cliques of variables. Laplacian potentials are useful because they are robust and match image statistics better than Gaussians. Moreover, energies with Laplacian terms remain convex, which simplifies inference. This makes Laplacian potentials an ideal modeling choice for some applications. In this paper, we study max-margin parameter learning in CRFs with Laplacian potentials (LCRFs). We first show that the structured hinge loss [35] is non-convex for LCRFs, and thus techniques used by previous works are not applicable. We then present the first approximate max-margin algorithm for LCRFs. Finally, we make our learning algorithm scalable in the number of training images by using dual-decomposition techniques. Our experiments on single-image depth estimation show that, even with simple features, our approach achieves results comparable to the state of the art.
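The robustness claim above can be made concrete with a small sketch. This is an illustrative comparison (not code from the paper): a Gaussian pairwise potential penalizes a label difference quadratically, so a single large discontinuity (e.g. a depth edge) dominates the energy, while a Laplacian potential grows only linearly in the difference yet remains convex. The scale parameters `sigma` and `b` are hypothetical choices for the illustration.

```python
import math

def gaussian_potential(d, sigma=1.0):
    # Quadratic (Gaussian negative log-likelihood) penalty: grows as d**2,
    # so large differences are punished very heavily (not robust to edges).
    return (d / sigma) ** 2

def laplacian_potential(d, b=1.0):
    # Absolute-value (Laplacian negative log-likelihood) penalty: grows
    # linearly in |d|, tolerating occasional large discontinuities while
    # keeping the overall energy convex.
    return abs(d) / b

# A large label difference of 10 costs 100 under the Gaussian potential
# but only 10 under the Laplacian potential.
for d in (0.1, 1.0, 10.0):
    print(d, gaussian_potential(d), laplacian_potential(d))
```

For a depth-discontinuity of magnitude 10, the quadratic penalty is ten times the linear one, which is why heavy-tailed Laplacian terms better match the statistics of natural scenes.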
Cite
Text

Batra and Saxena. "Learning the Right Model: Efficient Max-Margin Learning in Laplacian CRFs." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012. doi:10.1109/CVPR.2012.6247920

Markdown

[Batra and Saxena. "Learning the Right Model: Efficient Max-Margin Learning in Laplacian CRFs." IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2012.](https://mlanthology.org/cvpr/2012/batra2012cvpr-learning/) doi:10.1109/CVPR.2012.6247920

BibTeX
@inproceedings{batra2012cvpr-learning,
title = {{Learning the Right Model: Efficient Max-Margin Learning in Laplacian CRFs}},
author = {Batra, Dhruv and Saxena, Ashutosh},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year = {2012},
pages = {2136-2143},
doi = {10.1109/CVPR.2012.6247920},
url = {https://mlanthology.org/cvpr/2012/batra2012cvpr-learning/}
}