Exact Convex Confidence-Weighted Learning

Abstract

Confidence-weighted (CW) learning [6], an online learning method for linear classifiers, maintains a Gaussian distribution over weight vectors, with a covariance matrix that represents uncertainty about weights and correlations. Confidence constraints ensure that a weight vector drawn from the hypothesis distribution correctly classifies examples with a specified probability. Within this framework, we derive a new convex form of the constraint and analyze it in the mistake bound model. Empirical evaluation with both synthetic and text data shows that our version of CW learning achieves lower cumulative and out-of-sample errors than commonly used first-order and second-order online methods.
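To make the abstract concrete, here is a minimal NumPy sketch of an exact-CW-style online update. The closed-form quantities below (v, m, psi, zeta, alpha, u, beta) follow the update rules reported for this paper's exact convex variant; treat the exact formulas, the toy data, and the function name `cw_update` as illustrative assumptions rather than a verified reference implementation.

```python
import numpy as np

def cw_update(mu, Sigma, x, y, phi=1.0):
    """One exact-CW-style update (sketch, not the authors' code).

    mu    : mean weight vector
    Sigma : covariance matrix (uncertainty over weights)
    x, y  : example with label y in {-1, +1}
    phi   : confidence parameter, Phi^{-1}(eta) for target probability eta
    """
    v = x @ Sigma @ x          # variance of the margin under the Gaussian
    m = y * (mu @ x)           # mean signed margin
    psi = 1.0 + phi ** 2 / 2.0
    zeta = 1.0 + phi ** 2
    # Closed-form Lagrange multiplier; zero when the confidence
    # constraint m >= phi * sqrt(v) already holds.
    alpha = max(
        0.0,
        (-m * psi + np.sqrt(m ** 2 * phi ** 4 / 4.0 + v * phi ** 2 * zeta))
        / (v * zeta),
    )
    if alpha == 0.0:
        return mu, Sigma
    u = ((-alpha * v * phi + np.sqrt(alpha ** 2 * v ** 2 * phi ** 2 + 4.0 * v)) / 2.0) ** 2
    beta = alpha * phi / (np.sqrt(u) + v * alpha * phi)
    Sx = Sigma @ x
    mu = mu + alpha * y * Sx                 # move the mean toward correctness
    Sigma = Sigma - beta * np.outer(Sx, Sx)  # shrink variance along x
    return mu, Sigma

# Toy usage: one online pass over a separable 2-D dataset (illustrative data).
X = np.array([[1.0, 0.2], [0.8, -0.1], [-1.0, 0.1], [-0.9, -0.2]])
Y = np.array([1.0, 1.0, -1.0, -1.0])
mu, Sigma = np.zeros(2), np.eye(2)
for x, y in zip(X, Y):
    mu, Sigma = cw_update(mu, Sigma, x, y, phi=1.0)
```

Note the second-order character of the update: examples along directions where `Sigma` is still large (high uncertainty) produce larger steps, and each update reduces variance along the observed feature direction.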

Cite

Text

Crammer et al. "Exact Convex Confidence-Weighted Learning." Neural Information Processing Systems, 2008.

Markdown

[Crammer et al. "Exact Convex Confidence-Weighted Learning." Neural Information Processing Systems, 2008.](https://mlanthology.org/neurips/2008/crammer2008neurips-exact/)

BibTeX

@inproceedings{crammer2008neurips-exact,
  title     = {{Exact Convex Confidence-Weighted Learning}},
  author    = {Crammer, Koby and Dredze, Mark and Pereira, Fernando},
  booktitle = {Neural Information Processing Systems},
  year      = {2008},
  pages     = {345--352},
  url       = {https://mlanthology.org/neurips/2008/crammer2008neurips-exact/}
}