Mistake Bounds for Maximum Entropy Discrimination

Abstract

We establish a mistake bound for an ensemble method for classification based on maximizing the entropy of voting weights subject to margin constraints. The bound is the same as a general bound proved for the Weighted Majority Algorithm, and similar to bounds for other variants of Winnow. We prove a more refined bound that leads to a nearly optimal algorithm for learning disjunctions, again based on the maximum entropy principle. We describe a simplification of the on-line maximum entropy method in which, after each iteration, the margin constraints are replaced with a single linear inequality. The simplified algorithm, which takes a form similar to Winnow, achieves the same mistake bounds.

Cite

Text

Long and Wu. "Mistake Bounds for Maximum Entropy Discrimination." Neural Information Processing Systems, 2004.

Markdown

[Long and Wu. "Mistake Bounds for Maximum Entropy Discrimination." Neural Information Processing Systems, 2004.](https://mlanthology.org/neurips/2004/long2004neurips-mistake/)

BibTeX

@inproceedings{long2004neurips-mistake,
  title     = {{Mistake Bounds for Maximum Entropy Discrimination}},
  author    = {Long, Philip M. and Wu, Xinyu},
  booktitle = {Neural Information Processing Systems},
  year      = {2004},
  pages     = {833--840},
  url       = {https://mlanthology.org/neurips/2004/long2004neurips-mistake/}
}