Accelerated Training of Conditional Random Fields with Stochastic Gradient Methods
Abstract
We apply Stochastic Meta-Descent (SMD), a stochastic gradient optimization method with gain vector adaptation, to the training of Conditional Random Fields (CRFs). On several large data sets, the resulting optimizer converges to the same quality of solution over an order of magnitude faster than limited-memory BFGS, the leading method reported to date. We report results for both exact and inexact inference techniques.
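The core idea the abstract refers to, stochastic gradient descent with a per-parameter gain vector adapted via a Hessian-vector product, can be sketched on a toy least-squares objective. This is an illustration of the SMD update rule, not the paper's CRF implementation; the meta-gain `mu`, decay `lam`, initial gains, and the toy problem are illustrative choices, and only the `max(1/2, ...)` guard and the form of the updates follow the published SMD recipe.

```python
import numpy as np

# Toy problem: minimize 0.5 * ||A @ theta - b||^2 (stands in for the
# CRF negative log-likelihood; illustrative data, not from the paper).
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
b = rng.normal(size=50)

def grad(theta):
    # Gradient of the toy objective.
    return A.T @ (A @ theta - b)

def hess_vec(u):
    # Hessian-vector product; SMD needs only this product, never the
    # full Hessian, which is what keeps it cheap for large models.
    return A.T @ (A @ u)

theta = np.zeros(10)
eta = np.full(10, 0.01)   # per-parameter gain vector
v = np.zeros(10)          # tracks d(theta)/d(log eta) with decay
mu, lam = 0.05, 0.99      # meta-gain and decay factor (illustrative)

for _ in range(500):
    g = grad(theta)
    # Multiplicative gain adaptation; max(1/2, .) bounds the shrink
    # factor so a single noisy step cannot collapse a gain to zero.
    eta *= np.maximum(0.5, 1.0 - mu * g * v)
    theta -= eta * g
    # Update the auxiliary vector using the Hessian-vector product.
    v = lam * v - eta * (g + lam * hess_vec(v))

loss = 0.5 * np.linalg.norm(A @ theta - b) ** 2
```

When a parameter's gradient keeps pointing the same way, its gain grows; when the gradient starts oscillating, the `g * v` term turns positive and the gain shrinks, which is the self-tuning behavior that lets SMD outpace a fixed-schedule stochastic gradient method.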
Cite
Text
Vishwanathan et al. "Accelerated Training of Conditional Random Fields with Stochastic Gradient Methods." International Conference on Machine Learning, 2006. doi:10.1145/1143844.1143966

Markdown

[Vishwanathan et al. "Accelerated Training of Conditional Random Fields with Stochastic Gradient Methods." International Conference on Machine Learning, 2006.](https://mlanthology.org/icml/2006/vishwanathan2006icml-accelerated/) doi:10.1145/1143844.1143966

BibTeX
@inproceedings{vishwanathan2006icml-accelerated,
title = {{Accelerated Training of Conditional Random Fields with Stochastic Gradient Methods}},
author = {Vishwanathan, S. V. N. and Schraudolph, Nicol N. and Schmidt, Mark W. and Murphy, Kevin P.},
booktitle = {International Conference on Machine Learning},
year = {2006},
  pages = {969--976},
doi = {10.1145/1143844.1143966},
url = {https://mlanthology.org/icml/2006/vishwanathan2006icml-accelerated/}
}