Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers
Abstract
We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining excellent solution accuracy. The proposed approach, referred to as Inference by Learning or IbyL, relies on a multi-scale pruning scheme that progressively reduces the solution space by use of a coarse-to-fine cascade of learnt classifiers. We thoroughly experiment with classic computer-vision MRF problems, where our novel framework consistently yields a significant speed-up (with respect to the most efficient inference methods) and obtains a more accurate solution than directly optimizing the MRF. We make our code available online.
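To make the idea concrete, here is a minimal illustrative sketch (not the authors' code) of a coarse-to-fine pruning cascade: at each scale a per-label classifier score is thresholded, and only surviving labels are passed on to the next, finer scale. The `prune_labels` and `cascade` functions, the stand-in threshold rule, and all scores are hypothetical; in the paper the pruning decisions come from learnt classifiers.

```python
# Illustrative sketch (assumed names, not the paper's implementation):
# a coarse-to-fine cascade that prunes a node's candidate labels.

def prune_labels(scores, threshold):
    """Keep labels whose classifier score meets the threshold."""
    kept = [lbl for lbl, s in scores.items() if s >= threshold]
    # Never prune everything: fall back to the best-scoring label.
    return kept or [max(scores, key=scores.get)]

def cascade(label_scores_per_scale, thresholds):
    """Run the pruning cascade from the coarsest to the finest scale.

    label_scores_per_scale: list (coarse -> fine) of dicts mapping
        label -> classifier score at that scale.
    thresholds: one pruning threshold per scale.
    """
    candidates = set(label_scores_per_scale[0])  # start with the full label set
    for scores, t in zip(label_scores_per_scale, thresholds):
        # Only labels that survived the coarser scales stay in play.
        active = {lbl: s for lbl, s in scores.items() if lbl in candidates}
        candidates = set(prune_labels(active, t))
    return candidates

if __name__ == "__main__":
    scales = [
        {0: 0.9, 1: 0.2, 2: 0.8, 3: 0.1},  # coarse scale: two labels survive
        {0: 0.7, 1: 0.0, 2: 0.3, 3: 0.0},  # fine scale: one label survives
    ]
    print(sorted(cascade(scales, thresholds=[0.5, 0.5])))  # -> [0]
```

The surviving candidate set is what the final MRF optimizer would then search over, which is how the cascade trades a small amount of classification work for a much smaller solution space.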
Cite
Text
Conejo et al. "Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers." Neural Information Processing Systems, 2014.
Markdown
[Conejo et al. "Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers." Neural Information Processing Systems, 2014.](https://mlanthology.org/neurips/2014/conejo2014neurips-inference/)
BibTeX
@inproceedings{conejo2014neurips-inference,
title = {{Inference by Learning: Speeding-up Graphical Model Optimization via a Coarse-to-Fine Cascade of Pruning Classifiers}},
author = {Conejo, Bruno and Komodakis, Nikos and Leprince, Sebastien and Avouac, Jean Philippe},
booktitle = {Neural Information Processing Systems},
year = {2014},
pages = {2105--2113},
url = {https://mlanthology.org/neurips/2014/conejo2014neurips-inference/}
}