Learning Maximum-a-Posteriori Perturbation Models for Structured Prediction in Polynomial Time
Abstract
MAP perturbation models have emerged as a powerful framework for inference in structured prediction. Such models provide a way to efficiently sample from the Gibbs distribution and facilitate predictions that are robust to random noise. In this paper, we propose a provably polynomial time randomized algorithm for learning the parameters of perturbed MAP predictors. Our approach is based on minimizing a novel Rademacher-based generalization bound on the expected loss of a perturbed MAP predictor, which can be computed in polynomial time. We obtain conditions under which our randomized learning algorithm can guarantee generalization to unseen examples.
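To make the "efficiently sample from the Gibbs distribution" claim concrete, here is a minimal, illustrative Python sketch of the perturb-and-MAP idea the abstract builds on: adding i.i.d. Gumbel noise to the score of every structure and taking the argmax yields an exact sample from the Gibbs distribution (the Gumbel-max trick). This assumes a small, explicitly enumerable output space and is not the paper's learning algorithm; in structured prediction the space is exponentially large and the paper's contribution is learning the score parameters with generalization guarantees. The function name perturb_and_map is ours, for illustration only.

import numpy as np

def perturb_and_map(scores, rng, num_samples=1):
    # scores[y] is the (unnormalized) log-potential of structure y.
    # Perturbing every score with i.i.d. Gumbel(0, 1) noise and taking
    # the argmax draws an exact sample from p(y) proportional to exp(scores[y]).
    scores = np.asarray(scores, dtype=float)
    samples = []
    for _ in range(num_samples):
        gumbel = rng.gumbel(size=scores.shape)          # i.i.d. Gumbel(0, 1) perturbation
        samples.append(int(np.argmax(scores + gumbel)))  # perturbed MAP prediction
    return samples

rng = np.random.default_rng(0)
scores = [1.0, 2.0, 0.5]                                        # toy structure scores
draws = perturb_and_map(scores, rng, num_samples=10000)
empirical = np.bincount(draws, minlength=len(scores)) / len(draws)
gibbs = np.exp(scores) / np.sum(np.exp(scores))                 # Gibbs (softmax) distribution
print(empirical, gibbs)                                          # the two should roughly agree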
Cite
Text
Ghoshal and Honorio. "Learning Maximum-a-Posteriori Perturbation Models for Structured Prediction in Polynomial Time." International Conference on Machine Learning, 2018.
Markdown
[Ghoshal and Honorio. "Learning Maximum-a-Posteriori Perturbation Models for Structured Prediction in Polynomial Time." International Conference on Machine Learning, 2018.](https://mlanthology.org/icml/2018/ghoshal2018icml-learning/)
BibTeX
@inproceedings{ghoshal2018icml-learning,
title = {{Learning Maximum-a-Posteriori Perturbation Models for Structured Prediction in Polynomial Time}},
author = {Ghoshal, Asish and Honorio, Jean},
booktitle = {International Conference on Machine Learning},
year = {2018},
pages = {1754--1762},
volume = {80},
url = {https://mlanthology.org/icml/2018/ghoshal2018icml-learning/}
}