Evidence That Incremental Delta-Bar-Delta Is an Attribute-Efficient Linear Learner

Abstract

The Winnow class of on-line linear learning algorithms [10], [11] was designed to be attribute-efficient. When learning with many irrelevant attributes, Winnow makes a number of errors that is only logarithmic in the total number of attributes, whereas the Perceptron algorithm makes a nearly linear number of errors. This paper presents data arguing that the Incremental Delta-Bar-Delta (IDBD) second-order gradient-descent algorithm [14] is attribute-efficient, performs similarly to Winnow on tasks with many irrelevant attributes, and also does better than Winnow on a task where Winnow does poorly. Preliminary analysis supports this empirical claim by showing that IDBD, like Winnow and other attribute-efficient algorithms, and unlike the Perceptron algorithm, has weights that can grow exponentially quickly. By virtue of its more flexible approach to weight updates, however, IDBD may be a more practically useful learning algorithm than Winnow.
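For readers unfamiliar with IDBD, the following is a minimal sketch of the per-weight adaptive-step-size update from Sutton's IDBD paper (cited above as [14]), as commonly presented; variable names and default parameter values are illustrative choices, not taken from this paper. Each weight carries its own log learning rate beta_i, which is itself adapted by a meta learning rate theta, so the effective step size exp(beta_i), and hence the weight, can grow exponentially quickly for relevant attributes.

```python
import math

def idbd_init(n, theta=0.02, init_beta=math.log(0.05)):
    """Create IDBD state for n input attributes (illustrative defaults)."""
    return {
        "theta": theta,            # meta learning rate for adapting step sizes
        "w": [0.0] * n,            # linear weights
        "beta": [init_beta] * n,   # per-weight log learning rates
        "h": [0.0] * n,            # per-weight trace of recent updates
    }

def idbd_update(state, x, target):
    """One IDBD step on input vector x with a scalar target; returns the error."""
    w, beta, h = state["w"], state["beta"], state["h"]
    y = sum(wi * xi for wi, xi in zip(w, x))   # linear prediction
    delta = target - y                         # prediction error
    for i, xi in enumerate(x):
        # Meta step: raise beta_i when recent updates for weight i
        # have been correlated (delta * x_i agrees with the trace h_i).
        beta[i] += state["theta"] * delta * xi * h[i]
        alpha = math.exp(beta[i])              # per-weight learning rate
        # Ordinary LMS-style weight update with the adapted step size.
        w[i] += alpha * delta * xi
        # Decaying trace of this weight's recent updates.
        h[i] = h[i] * max(0.0, 1.0 - alpha * xi * xi) + alpha * delta * xi
    return delta
```

On a task where only one attribute is relevant, the learning rate for that attribute grows while the rates for irrelevant attributes shrink, which is the flexibility the abstract contrasts with Winnow's multiplicative updates.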

Cite

Text

Harris, Harlan D. "Evidence That Incremental Delta-Bar-Delta Is an Attribute-Efficient Linear Learner." European Conference on Machine Learning, 2002. doi:10.1007/3-540-36755-1_12

Markdown

[Harris, Harlan D. "Evidence That Incremental Delta-Bar-Delta Is an Attribute-Efficient Linear Learner." European Conference on Machine Learning, 2002.](https://mlanthology.org/ecmlpkdd/2002/harris2002ecml-evidence/) doi:10.1007/3-540-36755-1_12

BibTeX

@inproceedings{harris2002ecml-evidence,
  title     = {{Evidence That Incremental Delta-Bar-Delta Is an Attribute-Efficient Linear Learner}},
  author    = {Harris, Harlan D.},
  booktitle = {European Conference on Machine Learning},
  year      = {2002},
  pages     = {135--147},
  doi       = {10.1007/3-540-36755-1_12},
  url       = {https://mlanthology.org/ecmlpkdd/2002/harris2002ecml-evidence/}
}