Risk Bounds and Learning Algorithms for the Regression Approach to Structured Output Prediction
Abstract
We provide rigorous guarantees for the regression approach to structured output prediction. We show that the quadratic regression loss is a convex surrogate of the prediction loss when the output kernel satisfies a certain condition with respect to the prediction loss. We provide two upper bounds on the prediction risk that depend on the empirical quadratic risk of the predictor. The minimizer of the first bound is the predictor proposed by Cortes et al. (2007), while the minimizer of the second bound is a predictor that had not been proposed before. Both predictors are compared on practical tasks.
Cite
Text
Giguère et al. "Risk Bounds and Learning Algorithms for the Regression Approach to Structured Output Prediction." International Conference on Machine Learning, 2013.
Markdown
[Giguère et al. "Risk Bounds and Learning Algorithms for the Regression Approach to Structured Output Prediction." International Conference on Machine Learning, 2013.](https://mlanthology.org/icml/2013/giguere2013icml-risk/)
BibTeX
@inproceedings{giguere2013icml-risk,
title = {{Risk Bounds and Learning Algorithms for the Regression Approach to Structured Output Prediction}},
author = {Giguère, Sébastien and Laviolette, François and Marchand, Mario and Sylla, Khadidja},
booktitle = {International Conference on Machine Learning},
year = {2013},
pages = {107--114},
volume = {28},
url = {https://mlanthology.org/icml/2013/giguere2013icml-risk/}
}