Exponentiated Gradient Algorithms for Large-Margin Structured Classification
Abstract
We consider the problem of structured classification, where the task is to predict a label y from an input x, and y has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels y is exponential in size—provided that certain expectations under Gibbs distributions can be calculated efficiently. The method for structured labels relies on a more general result, specifically the application of exponentiated gradient updates [7, 8] to quadratic programs.
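As a rough illustration of the core idea (not the paper's exact formulation), the sketch below applies exponentiated gradient updates to a small quadratic program whose variables are constrained to the probability simplex, which is the general setting the abstract refers to. The objective, matrix A, vector b, step size eta, and the function name eg_qp are all illustrative assumptions.

```python
import numpy as np

def eg_qp(A, b, eta=0.1, n_iters=500):
    """Minimize f(a) = 0.5 * a'Aa - b'a over the probability simplex
    using multiplicative (exponentiated gradient) updates."""
    n = len(b)
    a = np.full(n, 1.0 / n)          # start at the uniform distribution
    for _ in range(n_iters):
        grad = A @ a - b             # gradient of the quadratic objective
        w = a * np.exp(-eta * grad)  # multiplicative (exponentiated) step
        a = w / w.sum()              # renormalize back onto the simplex
    return a

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M @ M.T                      # PSD matrix, so the QP is convex
    b = rng.standard_normal(5)
    alpha = eg_qp(A, b)
    print(alpha, alpha.sum())        # a point on the simplex
```

Because the iterate is always a normalized, nonnegative vector, it can be read as a Gibbs-style distribution over labels; in the structured case the paper replaces this explicit vector with expectations under a Gibbs distribution so that exponentially many labels never need to be enumerated.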
Cite
Text
Bartlett et al. "Exponentiated Gradient Algorithms for Large-Margin Structured Classification." Neural Information Processing Systems, 2004.
Markdown
[Bartlett et al. "Exponentiated Gradient Algorithms for Large-Margin Structured Classification." Neural Information Processing Systems, 2004.](https://mlanthology.org/neurips/2004/bartlett2004neurips-exponentiated/)
BibTeX
@inproceedings{bartlett2004neurips-exponentiated,
title = {{Exponentiated Gradient Algorithms for Large-Margin Structured Classification}},
author = {Bartlett, Peter L. and Collins, Michael and Taskar, Ben and McAllester, David A.},
booktitle = {Neural Information Processing Systems},
year = {2004},
pages = {113-120},
url = {https://mlanthology.org/neurips/2004/bartlett2004neurips-exponentiated/}
}