Learning Nearest-Neighbor Quantizers from Labeled Data by Information Loss Minimization

Abstract

Markov Random Fields (MRFs) are used in a large array of computer vision and machine learning applications. Finding the maximum a posteriori (MAP) solution of an MRF is in general intractable, and one has to resort to approximate solutions, such as Belief Propagation, Graph Cuts, or, more recently, approaches based on quadratic programming. We propose a novel type of approximation, Spectral relaxation to Quadratic Programming (SQP). We show that our method offers tighter bounds than recently published work while remaining computationally efficient. We compare our method to other algorithms on random MRFs in various settings.

Cite

Text

Lazebnik and Raginsky. "Learning Nearest-Neighbor Quantizers from Labeled Data by Information Loss Minimization." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.

Markdown

[Lazebnik and Raginsky. "Learning Nearest-Neighbor Quantizers from Labeled Data by Information Loss Minimization." Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.](https://mlanthology.org/aistats/2007/lazebnik2007aistats-learning/)

BibTeX

@inproceedings{lazebnik2007aistats-learning,
  title     = {{Learning Nearest-Neighbor Quantizers from Labeled Data by Information Loss Minimization}},
  author    = {Lazebnik, Svetlana and Raginsky, Maxim},
  booktitle = {Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics},
  year      = {2007},
  pages     = {251--258},
  volume    = {2},
  url       = {https://mlanthology.org/aistats/2007/lazebnik2007aistats-learning/}
}