Learning and Inference via Maximum Inner Product Search
Abstract
A large class of commonly used probabilistic models known as log-linear models are defined up to a normalization constant. Typical learning algorithms for such models require solving a sequence of probabilistic inference queries. These inferences are typically intractable and are a major bottleneck for learning models with large output spaces. In this paper, we provide a new approach for amortizing the cost of a sequence of related inference queries, such as the ones arising during learning. Our technique relies on a surprising connection with algorithms developed over the past two decades for similarity search in large databases. Our approach achieves improved running times with provable approximation guarantees. We show that it performs well both on synthetic data and on neural language models with large output spaces.
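To make the connection in the abstract concrete, here is a minimal sketch (not the paper's algorithm; all names and data are hypothetical) of why inference in a log-linear model reduces to an inner product search: if p(y | x) ∝ exp(q · e_y) for a query vector q and per-output embeddings e_y, then the most likely output is exactly the one whose embedding maximizes the inner product with q. The brute-force search below is what fast MIPS data structures answer approximately in sublinear time.

```python
# Illustrative sketch, assuming a log-linear model p(y | x) ∝ exp(q · e_y).
# The query vector and embeddings below are made-up toy data.

def mips(query, embeddings):
    """Brute-force maximum inner product search over output embeddings.

    Returns the index of the best output and its score (the unnormalized
    log-probability under the log-linear model)."""
    scores = [sum(q_i * e_i for q_i, e_i in zip(query, e)) for e in embeddings]
    best = max(range(len(embeddings)), key=scores.__getitem__)
    return best, scores[best]

# Toy "vocabulary" of 4 outputs with 3-dimensional embeddings.
embeddings = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.5, 0.5, 0.5],
    [-1.0, 0.2, 0.3],
]
query = [0.2, 0.1, 1.0]

idx, score = mips(query, embeddings)
print(idx, round(score, 2))  # output 2 scores highest (0.2*0.5 + 0.1*0.5 + 1.0*0.5 = 0.65)
```

A learning algorithm repeats such queries for a sequence of slowly changing parameter vectors, which is why amortizing them with an approximate MIPS index pays off.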
Cite
Text
Mussmann and Ermon. "Learning and Inference via Maximum Inner Product Search." International Conference on Machine Learning, 2016.
Markdown
[Mussmann and Ermon. "Learning and Inference via Maximum Inner Product Search." International Conference on Machine Learning, 2016.](https://mlanthology.org/icml/2016/mussmann2016icml-learning/)
BibTeX
@inproceedings{mussmann2016icml-learning,
title = {{Learning and Inference via Maximum Inner Product Search}},
author = {Mussmann, Stephen and Ermon, Stefano},
booktitle = {International Conference on Machine Learning},
year = {2016},
pages = {2587--2596},
volume = {48},
url = {https://mlanthology.org/icml/2016/mussmann2016icml-learning/}
}