Learning Linear Functions with Quadratic and Linear Multiplicative Updates

Abstract

We analyze variations of multiplicative updates for learning linear functions online. These can be described as substituting exponentiation in the Exponentiated Gradient (EG) algorithm with quadratic and linear functions. Both kinds of updates replace exponentiation with simpler operations and reduce dependence on the parameter that specifies the sum of the weights during learning. In particular, the linear multiplicative update places no restrictions on the sum of the weights and, under a wide range of conditions, achieves worst-case behavior close to that of the EG algorithm. We perform our analysis for square loss and absolute loss, and for regression and classification. We also describe experiments showing that the performance of our algorithms is comparable to EG and the $p$-norm algorithm.
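To make the idea concrete, the following sketch contrasts a standard EG step with linear and quadratic multiplicative variants. The exact update rules and loss setting are defined in the paper; here the linear and quadratic factors are illustrated as first- and second-order Taylor approximations of the exponential, which is one plausible reading of the title, and the clipping at zero is an added assumption to keep weights nonnegative.

```python
import math

def eg_update(w, grad, eta, U=1.0):
    # Exponentiated Gradient: multiply each weight by exp(-eta * gradient),
    # then renormalize so the weights sum to the parameter U.
    v = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    s = sum(v)
    return [U * vi / s for vi in v]

def linear_mult_update(w, grad, eta):
    # Linear variant (illustrative): replace exp(-eta*g) with its
    # first-order approximation 1 - eta*g, clipped at zero.
    # No renormalization, so the weight sum is unconstrained.
    return [wi * max(1.0 - eta * gi, 0.0) for wi, gi in zip(w, grad)]

def quadratic_mult_update(w, grad, eta):
    # Quadratic variant (illustrative): second-order approximation
    # 1 - eta*g + (eta*g)**2 / 2, again clipped at zero.
    return [wi * max(1.0 - eta * gi + (eta * gi) ** 2 / 2, 0.0)
            for wi, gi in zip(w, grad)]
```

For a single example with square loss, `grad` would be the gradient of the loss at the current prediction; the multiplicative factors differ only in how they approximate the exponential.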

Cite

Text

Bylander. "Learning Linear Functions with Quadratic and Linear Multiplicative Updates." International Conference on Machine Learning, 2011.

Markdown

[Bylander. "Learning Linear Functions with Quadratic and Linear Multiplicative Updates." International Conference on Machine Learning, 2011.](https://mlanthology.org/icml/2011/bylander2011icml-learning/)

BibTeX

@inproceedings{bylander2011icml-learning,
  title     = {{Learning Linear Functions with Quadratic and Linear Multiplicative Updates}},
  author    = {Bylander, Tom},
  booktitle = {International Conference on Machine Learning},
  year      = {2011},
  pages     = {505--512},
  url       = {https://mlanthology.org/icml/2011/bylander2011icml-learning/}
}